Plan behavioral guidelines
AI agents work differently from rule-based systems. To ensure a successful implementation, understand how their behavior differs, what you can and cannot control, and how to define the right boundaries.
The following sections explain the key considerations.
Predictability considerations
Rule-based systems:
- Predefined and predictable
- Fully controlled
- You know exactly how the system will respond in every scenario
These systems behave exactly as predefined. If an end user deviates from the intended interaction path, the system cannot adapt.
AI agents:
- Probabilistic and adaptive
- Context-driven
- The agent interprets the situation and chooses an appropriate response based on the information it has
Unlike rule-based systems, AI agents can interpret unexpected inputs and adjust in real time, adapting dynamically to the end user and the ongoing conversation. However, because of this dynamic adjustment, you cannot predict or control some aspects of agent behavior in advance.
The following section details specific aspects of agent behavior that you cannot control.
Agent behavior that you cannot control
- Exact phrasing control is limited. The agent generates responses dynamically based on context. If you need exact wording, such as for legal disclaimers or compliance statements, write that text into your application instead of having the agent generate it.
- Synonyms and rewording will happen. The agent responds to intent, not exact wording. For example, "I want to cancel my order" and "Can you stop my shipment?" might be handled the same way. You cannot script exact responses.
- The agent may exceed its intended scope. If you do not explicitly restrict a behavior, the agent might attempt it whenever it seems reasonable.
- You cannot control every individual message. Unlike rule-based chatbots, where you can review and approve every possible response, AI agents generate responses dynamically. You can define guidelines, boundaries, and behaviors, but not exact responses.
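Because exact phrasing cannot be guaranteed, wording that must appear verbatim belongs in application code rather than in the agent's output. A minimal sketch, assuming a hypothetical disclaimer text and delivery function (neither is part of any specific SDK):

```python
# Guarantee exact compliance wording by appending it in application code
# instead of asking the agent to generate it. The disclaimer text and
# function name below are hypothetical placeholders.

LEGAL_DISCLAIMER = (
    "This conversation is handled by an automated assistant. "
    "Responses are informational and do not constitute legal advice."
)

def deliver_reply(agent_reply: str) -> str:
    """Attach the exact, pre-approved disclaimer to every outgoing message."""
    return f"{agent_reply}\n\n{LEGAL_DISCLAIMER}"
```

With this approach, `deliver_reply("Your order has been cancelled.")` always ends with the approved wording, no matter how the agent phrased its reply.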
Behavioral rules and guardrails
To work within these control limitations, define explicit behavioral guidelines:
Capability boundaries: Define clear limits on what the agent can and cannot do.
- What the agent CAN do: be specific and detailed
- What the agent CANNOT do: be comprehensive and unambiguous
- Prohibited actions: never promise unavailable capabilities
Mandatory restrictions:
- Never offer actions that depend on unavailable tools or capabilities. For example, if you describe a refund tool in the system prompt but do not add the tool to your agent, the agent might still say "I can process your refund now."
- Always confirm before modifying end user data.
- Never share personally identifiable information.
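One way to catch the prompt/tool mismatch described above is a pre-publish check that every capability the system prompt mentions has a registered tool behind it. A minimal sketch with illustrative names (not a real SDK):

```python
# Catch prompt/tool mismatches before publishing. If the system prompt
# describes a capability (e.g. refunds) but no matching tool is registered,
# the agent may still promise that action. All names are illustrative.

def missing_tools(prompt_capabilities: set[str], registered_tools: set[str]) -> set[str]:
    """Return capabilities described in the prompt that have no backing tool."""
    return prompt_capabilities - registered_tools

# The prompt describes two capabilities, but only one tool is registered.
gaps = missing_tools({"cancel_order", "process_refund"}, {"cancel_order"})
# Any capability in `gaps` should be removed from the prompt or backed by a
# tool before the agent goes live.
```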
Communication style:
- Tone of voice (for example: professional, friendly, or casual)
- Brand voice
- Language preferences
- Level of formality
Safety and compliance:
- Legal disclaimers
- Privacy requirements
- Industry-specific regulations
- When to escalate to a human agent
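The four guideline categories above can be assembled into a single system prompt. A minimal sketch, assuming a plain-text prompt format; the section titles and example rules are illustrative, so adapt them to your platform:

```python
# Assemble guideline categories into one system prompt. The rules below are
# illustrative examples, not recommended wording for any specific product.

GUIDELINES = {
    "Capability boundaries": [
        "You CAN answer questions about order status and shipping.",
        "You CANNOT issue refunds or modify payment details.",
    ],
    "Mandatory restrictions": [
        "Never offer actions that depend on tools you do not have.",
        "Always confirm before modifying end user data.",
        "Never share personally identifiable information.",
    ],
    "Communication style": [
        "Use a professional, friendly tone consistent with the brand voice.",
    ],
    "Safety and compliance": [
        "Escalate to a human agent for legal or billing disputes.",
    ],
}

def build_system_prompt(guidelines: dict[str, list[str]]) -> str:
    """Render each guideline category as a titled, bulleted prompt section."""
    sections = []
    for title, rules in guidelines.items():
        bullets = "\n".join(f"- {rule}" for rule in rules)
        sections.append(f"{title}:\n{bullets}")
    return "\n\n".join(sections)
```

Keeping the categories in a structured object like this makes it easier to review and update individual rules without rewriting the whole prompt.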
If you do not explicitly define these constraints, the agent will make assumptions. It might claim capabilities that it does not have, take actions that were not intended, or generate responses that violate guidelines.
Balancing control and adaptability
When you use AI agents, you exchange strict, word-for-word control for adaptability and intelligence. The agent can manage unexpected situations more effectively, but it also requires well-defined boundaries and behavioral rules to ensure it stays within the intended scope.
When implementing your agent, include these guidelines in your system prompt. For examples and best practices, refer to Write prompts for AI agents.
Next steps
After defining your behavioral guidelines:
- Build your agent: Follow the Create and publish workflow to configure your agent with these guidelines
- Learn prompt best practices: See Write prompts for AI agents for detailed guidance on implementing behavioral guidelines