# MCP Tools
Extend your LLM's capabilities with built-in tools. Web search, code execution, and image analysis are automatically available when using compatible models.
## Overview
The Korad.AI platform supports Model Context Protocol (MCP) tools. Tools are automatically available when using compatible models, allowing your LLM to:
- Search the web for real-time information
- Execute code in a sandboxed environment
- Analyze images and extract information
## Available Tools
| Tool | Description | Cost |
|---|---|---|
| `web_search` | Real-time web search via Brave Search API | $0.01/query |
| `web_fetch` | Fetch and summarize URLs | $0.02/fetch |
| `code_sandbox` | Execute Python code in sandbox | $0.05/execution |
| `image_analysis` | Analyze images with vision models | $0.01/image |
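
As a quick illustration of how the per-call prices add up (the numbers are taken from the table above; the snippet itself is only an aid, not part of the platform):

```python
# Per-call tool prices from the table above (USD)
TOOL_PRICES = {
    "web_search": 0.01,
    "web_fetch": 0.02,
    "code_sandbox": 0.05,
    "image_analysis": 0.01,
}

# A request that triggers one web_search and one code_sandbox call
# adds $0.06 in tool charges on top of LLM generation.
tool_cost = TOOL_PRICES["web_search"] + TOOL_PRICES["code_sandbox"]
print(f"Tool cost: ${tool_cost:.2f}")  # Tool cost: $0.06
```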
## Tool Usage
Tools work automatically; no additional configuration is needed. When you make a request that requires information the model doesn't have, it will automatically use the appropriate tool.
### Example: Web Search
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8084/v1",
    api_key="sk-bf-YOUR_VIRTUAL_KEY",
)

# The model will automatically use web_search
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=[
        {
            "role": "user",
            "content": "What's the current price of Bitcoin? Search the web.",
        }
    ],
)

# The platform will:
# 1. Detect the need for web_search
# 2. Call the MCP tool
# 3. Include the tool results in the prompt
# 4. Generate the final response
# 5. Bill for tool usage + generation
```
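
Tool calls happen server-side, so the client receives only the final completion; reading it is standard OpenAI SDK usage:

```python
# Print the model's final, tool-augmented answer
print(response.choices[0].message.content)
```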
### Example: Code Execution
```python
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=[
        {
            "role": "user",
            "content": "Estimate the value of pi using a Monte Carlo simulation with 1,000,000 samples.",
        }
    ],
)

# The model will:
# 1. Recognize the need for code execution
# 2. Write Python code
# 3. Execute it in the sandbox
# 4. Return the results
```
## Tool Configuration
Tools are configured in `config.dev.json`:
```json
{
  "mcp": {
    "servers": {
      "korad-tools": {
        "command": "python",
        "args": ["./servers/korad-tools/server.py"],
        "env": {
          "BRAVE_SEARCH_KEY": "${env.BRAVE_SEARCH_KEY}",
          "BIFROST_URL": "${env.BIFROST_URL}",
          "DB_PATH": "${env.DB_PATH}"
        }
      }
    }
  }
}
```
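
Since the server's credentials are pulled from the environment, a quick pre-flight check can catch missing values before the stack starts. This is a minimal sketch; the variable names come from the config above, but the script itself is illustrative:

```python
import os
import sys

# Environment variables referenced by config.dev.json
REQUIRED_VARS = ["BRAVE_SEARCH_KEY", "BIFROST_URL", "DB_PATH"]

missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    # Fail fast with a clear message instead of letting the MCP server crash later
    sys.exit(f"Missing required environment variables: {', '.join(missing)}")

print("All MCP tool environment variables are set.")
```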
## Billing
Tool usage is billed separately from LLM generation:
- Tool costs appear in the `X-Korad-Actual-Cost` header
- Tool usage is logged in the usage database
- Tool costs are included in your monthly bill
### Cost Example
```
X-Korad-Theoretical-Cost: $0.015000   # LLM cost
X-Korad-Tool-Cost: $0.010000          # Tool cost
X-Korad-Actual-Cost: $0.025000        # Total actual cost
X-Korad-Billed-Amount: $0.037500      # With 1.5x margin
```
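
Reusing the `client` from the earlier example, these headers can also be read programmatically via the OpenAI SDK's raw-response interface (a sketch; the header names come from the example above, and `with_raw_response` is a standard SDK feature rather than anything Korad-specific):

```python
# Request the raw HTTP response so the billing headers are accessible
raw = client.chat.completions.with_raw_response.create(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=[{"role": "user", "content": "What's the current price of Bitcoin? Search the web."}],
)

print("Actual cost:", raw.headers.get("X-Korad-Actual-Cost"))
print("Billed amount:", raw.headers.get("X-Korad-Billed-Amount"))

response = raw.parse()  # Regular ChatCompletion object
print(response.choices[0].message.content)
```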
## Custom Tools
You can add custom MCP tools to your deployment:
- Create a new MCP server in `servers/your-tool/` (a sketch follows the configuration example below)
- Add the configuration to `config.dev.json`
- Restart the stack
```json
{
  "mcp": {
    "servers": {
      "your-tool": {
        "command": "node",
        "args": ["./servers/your-tool/index.js"]
      }
    }
  }
}
```
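
For reference, a minimal Python tool server might look like the sketch below. It uses the official `mcp` SDK's `FastMCP` helper; the tool name and logic are placeholders, and a Node server as in the config above would follow the same shape with the TypeScript SDK:

```python
# servers/your-tool/server.py (illustrative sketch)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("your-tool")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the whitespace-separated words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Speaks MCP over stdio, as launched via the "command"/"args" entry above
    mcp.run()
```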
## Tool Availability
Tools are available for all models that support function calling:
- ✅ Claude 3.5+ (anthropic)
- ✅ GPT-4o, GPT-4 Turbo (openai)
- ✅ Gemini 2.0+ (google-gemini)
- ⚠️ DeepSeek (limited support)
## Troubleshooting

### Tools Not Available
If tools are not being used:
- Check that the MCP server is running: `docker-compose logs korad-tools`
- Verify your API key for the tool (e.g., `BRAVE_SEARCH_KEY`)
- Ensure your model supports function calling
### Tool Errors
Tool errors are included in the response:
```json
{
  "error": {
    "message": "Tool execution failed: web_search - rate limit exceeded",
    "type": "tool_error"
  }
}
```
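
If the platform surfaces a tool error as an HTTP error status, the OpenAI SDK raises an exception you can catch. This is a sketch; whether tool errors arrive as an exception or inside a normal response body is an assumption here:

```python
import openai

try:
    response = client.chat.completions.create(
        model="anthropic/claude-sonnet-4-5-20250929",
        messages=[{"role": "user", "content": "What's the current price of Bitcoin? Search the web."}],
    )
    print(response.choices[0].message.content)
except openai.APIError as exc:
    # e.g. "Tool execution failed: web_search - rate limit exceeded"
    print(f"Request failed: {exc}")
```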
Extend your LLM's capabilities with MCP tools.