OpenClaw Configuration Deep Dive: Every Setting Explained
OpenClaw is a powerful AI agent framework, and its flexibility comes from a rich set of configuration options. Whether you are deploying your first agent or tuning an existing one for peak performance, understanding every available setting gives you the ability to shape your agent's behavior precisely.
This guide covers every major OpenClaw configuration category: core settings, model parameters, system prompt design, skill configuration, integration settings, performance tuning, and security options. We will explain what each setting does, when to change it, and what values work best for common use cases.
If you are new to OpenClaw and EZClaws, start with the deployment tutorial first, then come back here for advanced configuration.
Core Agent Settings
These are the fundamental settings that define what your agent is and how it operates.
Display Name
What it does: Sets the human-readable name shown on your EZClaws dashboard and in some integration contexts.
Best practice: Use a descriptive name that helps you identify the agent's purpose at a glance. "Customer Support - Main" is more useful than "Agent 1."
When to change: Anytime. This is a cosmetic setting with no impact on behavior.
Model Provider
What it does: Determines which AI model service your agent uses to generate responses. Current options include OpenAI, Anthropic, and Google.
Best practice: Choose based on your use case:
- OpenAI - Best ecosystem support, widest model range from GPT-4o-mini to GPT-4
- Anthropic - Best instruction following, lower hallucination rates with Claude models
- Google - Competitive pricing with Gemini models, good for budget-conscious deployments
See our model comparison guide for detailed benchmarks.
When to change: Usually set once during deployment. Changing requires a restart.
Model Selection
What it does: Specifies which specific model from the chosen provider to use.
Common options:
- OpenAI: gpt-4, gpt-4o, gpt-4o-mini, gpt-4-turbo
- Anthropic: claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-3.5-sonnet
- Google: gemini-pro, gemini-pro-1.5
Best practice: Start with a mid-tier model (GPT-4o-mini, Claude Sonnet, Gemini Pro) for development and testing. Upgrade to a higher-tier model only if the quality justifies the cost increase.
When to change: Experiment with different models during development. Switching models in production should be tested first.
API Key
What it does: Authenticates your agent with the model provider's API.
Best practice: Use a dedicated API key for each agent rather than sharing keys across multiple services. This makes it easier to track costs and revoke access if needed. See our API keys guide for security best practices.
When to change: When rotating keys for security, or when switching to a different API account.
System Prompt Configuration
The system prompt is the most impactful configuration setting. It defines everything about how your agent behaves.
Structure of an Effective System Prompt
A well-structured system prompt has these sections:
Identity Block
Tell the agent who it is:
You are Nova, a customer support agent for NovaPack Supply Co. You help
customers with order tracking, product questions, returns, and general inquiries.
Why it matters: A clear identity prevents the agent from acting confused about its role or giving generic responses.
Behavioral Rules
Define explicit rules for how the agent should act:
Rules:
- Always greet the customer by name if available
- Keep responses under 200 words unless the customer asks for details
- Confirm understanding before taking any action
- Never share information about other customers
- If asked about competitors, redirect to our product advantages without disparaging them
Why it matters: Without explicit rules, the agent defaults to generic language model behavior, which may not align with your brand or use case.
Response Format Guidelines
Specify how responses should be structured:
Response format:
- Use short paragraphs (2-3 sentences maximum)
- Use bullet points for lists of options or steps
- Bold important information like order numbers and tracking links
- End every response with a clear next step or question
Why it matters: Consistent formatting improves readability and user experience, especially on mobile platforms.
Knowledge Boundaries
Tell the agent what it knows and does not know:
Knowledge boundaries:
- You know about NovaPack products, pricing, shipping, and return policies
- You can check order status using the order lookup skill
- You do NOT know about competitor products in detail
- You do NOT have access to internal business metrics
- If asked about something outside your knowledge, say "I don't have information about that, but I can connect you with our team"
Why it matters: Clear boundaries reduce hallucination and set appropriate user expectations.
Escalation Criteria
Define when the agent should hand off to a human:
Escalation:
- Escalate immediately if the customer expresses strong frustration or anger
- Escalate if the issue involves a billing dispute over $100
- Escalate if you have attempted to resolve the issue twice without success
- Escalate if the customer explicitly asks for a human agent
- When escalating, summarize the conversation and the customer's issue
Why it matters: Good escalation criteria ensure the agent does not frustrate users by persisting on issues it cannot resolve.
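As a sketch, the escalation criteria above could be encoded as a simple rule check. The function and field names (frustrated, dispute_amount, attempts, asked_for_human) are hypothetical illustrations, not part of OpenClaw's actual API:

```python
def should_escalate(frustrated: bool, dispute_amount: float,
                    attempts: int, asked_for_human: bool) -> bool:
    """Return True when any escalation criterion from the prompt is met."""
    if frustrated or asked_for_human:
        return True
    if dispute_amount > 100:   # billing dispute over $100
        return True
    if attempts >= 2:          # two failed resolution attempts
        return True
    return False
```

Encoding the rules this way makes them easy to unit-test before you bake them into the prompt.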
System Prompt Length
Recommendation: Between 200 and 800 words. Shorter prompts lack specificity. Longer prompts may cause the agent to miss important instructions buried in the text.
Tip: Organize your prompt with clear sections and use formatting (bullet points, headers) to make it scannable. Language models respond well to well-organized instructions just like humans do.
Dynamic vs Static Prompts
Static prompts stay the same for every conversation. This is the default and works well for most use cases.
Dynamic prompts can include variables that change based on context (time of day, user attributes, conversation channel). This requires custom skill development but can significantly personalize the agent experience.
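A dynamic prompt is essentially a template with per-conversation variables filled in. The variable names below (user_name, channel, time_of_day) are illustrative, assuming a custom skill supplies them:

```python
from string import Template

# Template with per-conversation placeholders; rendered before each session.
PROMPT = Template(
    "You are Nova, a support agent for NovaPack Supply Co. You are talking "
    "to $user_name on $channel. It is currently $time_of_day."
)

def render_prompt(user_name: str, channel: str, time_of_day: str) -> str:
    """Fill the template with context gathered at conversation start."""
    return PROMPT.substitute(user_name=user_name, channel=channel,
                             time_of_day=time_of_day)
```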
Model Parameters
These settings control how the language model generates responses.
Temperature
What it does: Controls randomness in responses. Range is 0.0 to 2.0.
- 0.0-0.3: Very deterministic. The agent gives nearly identical responses to the same question every time. Best for factual queries, customer support, and tasks where consistency is critical.
- 0.4-0.7: Balanced. Some variation in responses while maintaining accuracy. Good default for most use cases.
- 0.8-1.2: Creative. More varied and unpredictable responses. Good for brainstorming, content creation, and creative writing.
- 1.3+: Highly random. Can produce unusual or incoherent responses. Rarely recommended.
Default: 0.7
Best practice: Start at 0.5 for support agents, 0.7 for general-purpose agents, 0.9 for creative agents. Adjust based on testing.
Max Tokens
What it does: Sets the maximum length of the agent's response in tokens (roughly 1 token = 0.75 words).
- 256 tokens (~190 words): Good for concise, mobile-friendly responses
- 512 tokens (~380 words): Balanced for most use cases
- 1024 tokens (~760 words): Detailed responses
- 2048+ tokens: Long-form responses for complex explanations
Best practice: Set this based on your platform. WhatsApp and Telegram users expect shorter responses (256-512). Web chat users tolerate longer responses (512-1024). If the agent consistently hits the token limit, increase it or add instructions to be more concise.
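The word estimates above follow the 1 token ≈ 0.75 words rule of thumb. A tiny budgeting helper makes the conversion explicit (real tokenizers vary by model, so treat the numbers as approximations):

```python
def words_for_tokens(max_tokens: int) -> int:
    """Approximate word budget for a given max_tokens setting."""
    return int(max_tokens * 0.75)

def tokens_for_words(words: int) -> int:
    """Approximate max_tokens needed for a target word count."""
    return int(words / 0.75)
```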
Top P (Nucleus Sampling)
What it does: An alternative to temperature for controlling response diversity. Range is 0.0 to 1.0. Lower values make the agent more focused; higher values make it more diverse.
Default: 1.0
Best practice: Usually leave this at the default and use temperature instead. Adjusting both simultaneously can produce unpredictable results.
Frequency Penalty
What it does: Reduces the likelihood of the agent repeating the same words or phrases. Range is 0.0 to 2.0.
Default: 0.0
Best practice: Set to 0.3-0.5 if you notice the agent being repetitive. Higher values may cause it to use unusual synonyms.
Presence Penalty
What it does: Encourages the agent to introduce new topics rather than staying on the same subject. Range is 0.0 to 2.0.
Default: 0.0
Best practice: Set to 0.3-0.5 for agents that need to cover diverse topics. Keep at 0.0 for focused, single-topic agents like customer support.
Conversation Management
These settings control how the agent handles conversation history and context.
Context Window Size
What it does: Determines how many previous messages are included when generating a response. The agent needs conversation history for context, but more history means more token consumption.
Options:
- 5 messages: Minimal context. Good for simple Q&A where each message is independent.
- 10 messages: Standard. Adequate for most conversations.
- 20 messages: Extended. Good for complex, multi-turn problem solving.
- Unlimited: Includes full conversation history. Most expensive and may hit model context limits.
Best practice: Start with 10 messages. Increase if users report the agent "forgetting" things from earlier in the conversation. Decrease if you need to reduce costs.
Conversation Timeout
What it does: Automatically resets the conversation context after a period of inactivity. This prevents the agent from using stale context and reduces token consumption.
Options: Typically 15 minutes to 24 hours.
Best practice: 30 minutes for support agents, 1 hour for general-purpose agents. Very short timeouts frustrate users who step away briefly.
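The timeout logic amounts to comparing the gap since the last message against the configured limit. A sketch, assuming timestamps in seconds:

```python
TIMEOUT_SECONDS = 30 * 60  # 30 minutes, the suggested support-agent default

def should_reset(last_message_at: float, now: float,
                 timeout: float = TIMEOUT_SECONDS) -> bool:
    """Reset the conversation context once inactivity exceeds the timeout."""
    return (now - last_message_at) > timeout
```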
Conversation Isolation
What it does: Determines whether conversations are isolated by user, by channel, or shared.
- Per-user: Each user has their own conversation context. Messages from one user do not affect another's context. Standard for support agents.
- Per-channel: All users in a channel share the same conversation context. Useful for Discord or Slack channels where the agent serves a group.
- Shared: All conversations share context. Rarely recommended.
Best practice: Use per-user isolation for support and personal assistant agents. Use per-channel isolation for Discord and Slack community bots.
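One way to picture the three isolation modes is as different keys into the context store. This is an illustrative sketch, not OpenClaw's internal scheme:

```python
def context_key(user_id: str, channel_id: str, mode: str = "per-user") -> str:
    """Derive the context-store key under each isolation mode."""
    if mode == "per-user":
        return f"user:{user_id}"       # each user gets private context
    if mode == "per-channel":
        return f"channel:{channel_id}"  # everyone in a channel shares context
    return "shared"                     # one global context (rarely useful)
```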
Integration Settings
These configure how your agent connects to external platforms.
Telegram Integration
Required setting: Telegram Bot Token
Optional settings:
- Allowed chat IDs (restrict which chats the agent responds in)
- Admin user IDs (users who can send admin commands)
- Parse mode (HTML or Markdown for formatted responses)
See our guides for platform-specific deep dives: WhatsApp, Discord.
Webhook Configuration
What it does: Configures external webhook URLs for receiving events or integrating with third-party services.
Settings:
- Webhook URL
- Authentication headers
- Event types to forward
- Retry policy
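A typical retry policy for webhook delivery uses exponential backoff. The sketch below stands in for the real behavior; `deliver` is a placeholder for the actual HTTP POST and nothing here reflects OpenClaw's internal implementation:

```python
import time

def deliver_with_retry(deliver, max_attempts: int = 3,
                       base_delay: float = 1.0, sleep=time.sleep):
    """Call `deliver`, retrying with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        try:
            return deliver()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` as a parameter keeps the backoff testable without real waiting.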
Admin Secret
What it does: A secret that protects administrative endpoints on your agent. Required for accessing admin features through the API.
Best practice: Use a strong, unique secret. Never share it. EZClaws hashes the secret before storing it.
Skills Configuration
Skills are modular capabilities added to your agent through the Skills Marketplace.
Skill Priority
What it does: When multiple skills could handle a request, priority determines which one runs first.
Best practice: Put the most specific skills at higher priority. A "Shopify Order Lookup" skill should have higher priority than a generic "Web Search" skill for order-related queries.
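Priority-based dispatch can be sketched as: collect every skill whose matcher accepts the request, then pick the highest priority. The skill shape (name, priority, matches) is a hypothetical illustration:

```python
def pick_skill(skills: list[dict], query: str):
    """Return the highest-priority skill that matches the query, or None."""
    candidates = [s for s in skills if s["matches"](query)]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s["priority"])
```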
Skill Environment Variables
What it does: Some skills require their own configuration, like API keys for external services.
Best practice: Use the EZClaws dashboard to set skill-specific environment variables securely. Never hardcode credentials.
Skill Chaining
What it does: Allows the output of one skill to be used as input to another.
Example: A "Language Detection" skill detects the user's language, and a "Translation" skill uses that output to translate the response.
Best practice: Keep chains short (2-3 skills maximum) to avoid latency. Test chains thoroughly, as errors in one skill can cascade.
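The language-detection example above can be sketched as a pipeline where each skill's output feeds the next. Both skill functions here are toy stand-ins, not marketplace skills:

```python
def detect_language(text: str) -> str:
    """Toy detector: real skills would use an actual language model."""
    return "es" if "hola" in text.lower() else "en"

def translate(text: str, target: str) -> str:
    """Toy translator: just tags the text with the target language."""
    return f"[{target}] {text}"

def chain(skills, value):
    """Feed each skill's output into the next skill in the list."""
    for skill in skills:
        value = skill(value)
    return value
```

Usage: `lang = detect_language(user_message)` then `translate(reply, lang)` mirrors the two-skill chain described above.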
For building your own skills, see the skills development guide.
Performance Tuning
These settings optimize your agent's speed and cost efficiency.
Response Streaming
What it does: Sends the response word-by-word as it is generated instead of waiting for the complete response. Creates a more responsive feel for the user.
Best practice: Enable for web chat and Telegram. Some platforms do not support streaming, in which case the full response is sent once complete.
Request Timeout
What it does: Maximum time allowed for a single response generation. If the model provider takes longer than this, the request is cancelled.
Default: 30 seconds
Best practice: 30 seconds for most use cases. Increase to 60 seconds if you are using a slower model or have complex skill chains. Decrease to 15 seconds if fast response time is critical and you prefer a timeout over a slow response.
Rate Limiting
What it does: Limits how many requests your agent processes per time period. Protects against abuse and unexpected cost spikes.
Settings:
- Requests per minute per user
- Requests per minute total
- Burst allowance
Best practice: Start with 10 requests per minute per user and 100 requests per minute total. Adjust based on actual usage patterns.
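A per-user limit like "10 requests per minute" is commonly implemented as a sliding window. A minimal sketch (assumed design, not OpenClaw's actual limiter):

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, per key."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit, self.window = limit, window
        self.hits: dict[str, deque] = {}

    def allow(self, key: str, now: float) -> bool:
        q = self.hits.setdefault(key, deque())
        while q and now - q[0] >= self.window:
            q.popleft()          # drop timestamps outside the window
        if len(q) >= self.limit:
            return False         # over the limit; reject
        q.append(now)
        return True
```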
Caching
What it does: Caches identical requests to avoid redundant model API calls. If the same question is asked within the cache window, the cached response is returned instantly.
Best practice: Enable for FAQ-type agents where the same questions are asked frequently. Disable for agents where personalized responses are important or where information changes frequently.
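The cache-window behavior can be sketched as a simple TTL cache keyed by the question text (illustrative only; the real cache likely normalizes requests more carefully):

```python
class TTLCache:
    """Return a cached answer while it is younger than `ttl` seconds."""

    def __init__(self, ttl: float = 300.0):
        self.ttl, self.store = ttl, {}

    def get(self, question: str, now: float):
        hit = self.store.get(question)
        if hit and now - hit[0] < self.ttl:
            return hit[1]       # fresh: skip the model API call
        return None             # missing or expired

    def put(self, question: str, answer: str, now: float):
        self.store[question] = (now, answer)
```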
Security Settings
API Key Encryption
EZClaws encrypts all API keys at rest. This is automatic and not configurable. Your keys are never exposed in logs, dashboards, or API responses.
Input Sanitization
What it does: Filters or sanitizes user input before processing. Helps prevent prompt injection attacks.
Best practice: Enable for any customer-facing agent. The default sanitization catches most common injection attempts.
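For intuition, a naive injection filter flags a few well-known prompt-injection phrases. This is a deliberately simplistic illustration; production sanitizers (including whatever OpenClaw ships) are far more thorough:

```python
import re

# A few well-known injection phrases; real filters use much larger rule sets.
SUSPICIOUS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the input matches any known injection pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS)
```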
Output Filtering
What it does: Scans agent responses for sensitive content (PII, profanity, off-brand content) before sending to the user.
Best practice: Enable for customer-facing agents, especially in regulated industries. Configure the filter to match your content policies.
Logging Level
What it does: Controls how much detail is captured in agent logs.
- Minimal: Status changes and errors only
- Standard: Status changes, errors, and request metadata
- Verbose: Full request and response content (useful for debugging, but captures user data)
Best practice: Use standard for production. Switch to verbose only when debugging specific issues, and switch back afterward. Be mindful of data privacy regulations when using verbose logging.
Configuration Recipes
Here are complete configuration templates for common use cases.
Customer Support Agent
Model: Claude Sonnet (Anthropic)
Temperature: 0.3
Max Tokens: 512
Context Window: 10 messages
Conversation Timeout: 30 minutes
Rate Limit: 10 req/min per user
Streaming: Enabled
Input Sanitization: Enabled
Output Filtering: Enabled
Community Discord Bot
Model: GPT-4o-mini (OpenAI)
Temperature: 0.7
Max Tokens: 256
Context Window: 5 messages (per-channel)
Conversation Timeout: 15 minutes
Rate Limit: 5 req/min per user
Streaming: Disabled (Discord handles its own display)
Personal Research Assistant
Model: GPT-4 (OpenAI) or Claude Opus (Anthropic)
Temperature: 0.5
Max Tokens: 2048
Context Window: 20 messages
Conversation Timeout: 2 hours
Rate Limit: 20 req/min per user
Streaming: Enabled
Content Creation Agent
Model: GPT-4 (OpenAI)
Temperature: 0.9
Max Tokens: 2048
Context Window: 15 messages
Conversation Timeout: 1 hour
Rate Limit: 10 req/min per user
Frequency Penalty: 0.3
Presence Penalty: 0.5
Applying Configuration Changes
Through the EZClaws Dashboard
- Navigate to your agent's detail page in the dashboard.
- Click the settings or configuration section.
- Modify the desired settings.
- Save changes.
- Some settings take effect immediately. Others require a restart (you will be prompted).
Through Environment Variables
For advanced settings not exposed in the dashboard, you can set environment variables on your agent. This is done through the EZClaws dashboard's environment variable section on your agent's detail page.
Testing Changes
Always test configuration changes in a controlled way:
- Note your current configuration (take a screenshot or write it down).
- Make one change at a time.
- Test with several representative conversations.
- If the change does not improve things, revert.
- Document what worked and what did not.
Conclusion
OpenClaw's configuration system gives you fine-grained control over your agent's behavior, performance, and security. The key is to start simple, measure the impact of each change, and iterate.
The most impactful settings for most users are:
- System prompt - Defines behavior and quality (invest the most time here)
- Model selection - Determines capability and cost
- Temperature - Controls response consistency
- Context window - Balances context quality and cost
- Skills - Extends capabilities beyond conversation
For more guidance, explore our other guides: deployment tutorial, API keys, model comparison, and monitoring.
Frequently Asked Questions
Where do I configure these settings?
When using EZClaws, you configure most settings through the dashboard on your agent's detail page. Advanced settings can be configured through environment variables or by editing the OpenClaw configuration file directly. The EZClaws dashboard covers the most common settings with a user-friendly interface.
Can I change settings without restarting the agent?
Some settings can be changed while the agent is running, such as the system prompt and skill installations. Other settings, like the model provider or API key, require a restart. EZClaws makes restarting easy with a one-click restart button on the dashboard.
Which setting has the biggest impact?
The system prompt is by far the most impactful setting. It defines your agent's personality, knowledge boundaries, response format, and behavioral rules. A well-written system prompt can make a cheaper model outperform an expensive model with a poor prompt.
How do I make my agent handle multiple languages?
Add instructions in your system prompt about language handling. You can tell the agent to detect the user's language and respond in that same language, or to always respond in a specific language. Some language detection skills in the marketplace can also help with this.
What happens if I misconfigure something?
Most misconfigurations will result in your agent behaving unexpectedly rather than crashing. If a critical setting like the API key is wrong, the agent will show an error status on the dashboard. You can always restart with corrected settings. The agent event log on EZClaws helps diagnose configuration issues.
Your OpenClaw Agent Is Waiting for You
Our provisioning engine is standing by to spin up your private OpenClaw instance — dedicated VM, HTTPS endpoint, and full autonomy in under a minute.
