MCP Prompts - Deep Dive
What Are Prompts?
In MCP (Model Context Protocol), Prompts are reusable prompt templates that standardize complex multi-step interactions.
Three Pillars of MCP
┌─────────────────────────────────────────────┐
│                MCP Protocol                 │
├─────────────────────────────────────────────┤
│                                             │
│  1. Tools (Actions)                         │
│     - navigate, click, type, screenshot     │
│     - Execute, validate                     │
│     - LLM calls these directly              │
│                                             │
│  2. Resources (Data Sources)                │
│     - screenshots, forms, tests, patterns   │
│     - LLM reads these                       │
│     - Read-only access                      │
│                                             │
│  3. Prompts (Templates)   ← THIS FOCUS      │
│     - Reusable prompt patterns              │
│     - Multi-step workflows                  │
│     - Standardized interactions             │
│     - LLM calls these indirectly            │
│                                             │
└─────────────────────────────────────────────┘
Why Prompts Matter
Without Prompts
Every interaction:
LLM → "What do you want?"
User → "Screenshot dashboard"
LLM → "Okay, navigating..."
LLM → "Screenshotting..."
LLM → "Here it is"
User → "What numbers?"
LLM → "Analyzing..."Problem: Repetitive, inconsistent, no best practices
With Prompts
User → "Analyze dashboard"
LLM → Calls "analyze_dashboard" prompt
LLM → Pre-built template runs:
1. Navigate to dashboard URL
2. Verify "Dashboard" present
3. Capture screenshot
4. Extract metrics via OCR
5. Return analysis
LLM → "Dashboard shows 1,234 users, 45% growth"Benefit: Consistent, tested, reusable
Prompt Schema
Basic Structure
{
"name": "analyze_dashboard",
"description": "Analyze dashboard metrics and return key insights",
"arguments": [
{
"name": "dashboard_url",
"description": "URL of the dashboard to analyze",
"required": true
},
{
"name": "metrics",
"description": "Comma-separated list of metrics to extract",
"required": false
}
]
}
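For illustration, a server can check these required flags before executing a prompt. Below is a minimal sketch, assuming schemas shaped like the example above; validate_arguments is a hypothetical helper, not part of the MCP spec:
prompt_schema = {
    "name": "analyze_dashboard",
    "arguments": [
        {"name": "dashboard_url", "required": True},
        {"name": "metrics", "required": False},
    ],
}

def validate_arguments(prompt: dict, provided: dict) -> list:
    """Return the names of required arguments that were not supplied."""
    return [
        arg["name"]
        for arg in prompt.get("arguments", [])
        if arg.get("required") and arg["name"] not in provided
    ]

print(validate_arguments(prompt_schema, {"metrics": "users,growth"}))
# -> ['dashboard_url']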
Complete Schema (MCP Standard)
{
"name": "prompt_name",
"description": "Human-readable description",
"arguments": [
{
"name": "arg_name",
"description": "What this argument does",
"required": boolean
}
],
"metadata": {
"author": "Seneca",
"version": "1.0",
"tags": ["vision", "analysis", "dashboard"],
"category": "automation"
}
}
Prompt Template Types
Type 1: Sequential Workflows
Prompts that execute steps in order.
Example: Login Flow
Prompt: "login_to_service"
Steps:
1. Navigate to login URL
2. Verify login form present
3. Fill in credentials
4. Click submit
5. Verify success message
6. Screenshot for evidence
Implementation:
{
"name": "login_to_service",
"description": "Complete login workflow with verification",
"arguments": [
{"name": "url", "required": true},
{"name": "username", "required": true},
{"name": "password", "required": false}
],
"template": [
"Navigate to {{url}}",
"Verify 'Login' is visible",
"Fill form: username={{username}}, password={{password}}",
"Click submit button",
"Verify 'Welcome' or 'Success' is visible",
"Capture screenshot"
]
}
Type 2: Analysis Prompts
Prompts that analyze data and return insights.
Example: Dashboard Analysis
Prompt: "analyze_dashboard"
Analysis:
1. Navigate to dashboard
2. Screenshot
3. OCR extract all numbers
4. Calculate statistics (mean, median, trend)
5. Return formatted report
Implementation:
{
"name": "analyze_dashboard",
"description": "Extract and analyze dashboard metrics",
"arguments": [
{"name": "url", "required": true},
{"name": "metrics", "required": false}
],
"template": [
"Navigate to {{url}}",
"Verify dashboard loaded",
"Capture screenshot",
"Extract numbers via OCR",
"Calculate: mean, median, growth_rate",
"Return formatted report"
]
}
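The analysis template above hands extracted numbers to a calculate_statistics helper. A minimal sketch of what such a helper might compute (mean, median, growth rate), assuming the OCR step yields a flat list of numbers (the exact helper is an assumption, not defined by the prompt itself):
import statistics

def calculate_statistics(numbers):
    """Hypothetical helper: summarize a list of OCR-extracted metric values."""
    if not numbers:
        return {"mean": None, "median": None, "growth_rate": None}
    growth = None
    if len(numbers) >= 2 and numbers[0] != 0:
        # Percentage change from the first to the last value
        growth = round((numbers[-1] - numbers[0]) / numbers[0] * 100, 2)
    return {
        "mean": statistics.mean(numbers),
        "median": statistics.median(numbers),
        "growth_rate": growth,
    }

print(calculate_statistics([1200, 1234]))
# e.g. {'mean': 1217, 'median': 1217.0, 'growth_rate': 2.83}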
Type 3: Conditional Prompts
Prompts that branch based on conditions.
Example: Check Service Health
Prompt: "check_service_health"
Logic:
IF error message visible → Report down
ELSE → Check metrics
IF metrics < threshold → Report warning
ELSE → Report healthy
Implementation:
{
"name": "check_service_health",
"description": "Check if service is healthy and report status",
"arguments": [
{"name": "url", "required": true},
{"name": "threshold", "required": false}
],
"template": [
"Navigate to {{url}}",
"IF 'Error' or 'Down' visible →",
" Return: Status=DOWN, Reason=Error message",
"ELSE →",
" Check health metrics",
" IF metrics < {{threshold}} →",
" Return: Status=WARN, Metrics=[...]",
" ELSE →",
" Return: Status=HEALTHY, Metrics=[...]"
]
}
Type 4: Iterative Prompts
Prompts that repeat until a condition is met.
Example: Wait for Element
Prompt: "wait_for_element"
Logic:
WHILE element not visible:
1. Screenshot
2. Check for text
3. IF found → Break
4. ELSE → Wait 1s, Repeat
5. IF timeout → Return error
Implementation:
{
"name": "wait_for_element",
"description": "Wait up to 30 seconds for element to appear",
"arguments": [
{"name": "element_text", "required": true},
{"name": "timeout", "required": false, "default": 30}
],
"template": [
"REPEAT (up to {{timeout}} seconds):",
" Screenshot",
" IF '{{element_text}}' visible → Break",
" ELSE → Wait 1s",
" Continue",
"IF never found → Return: Element not found (timeout)"
]
}
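The iterative template maps naturally onto a polling loop. A rough sketch, where check is an assumed callable that screenshots and searches for the text (for example via vision_mcp.screenshot plus OCR); nothing here is prescribed by the prompt format itself:
import time

def wait_for_element(element_text, check, timeout=30, interval=1.0):
    """Poll until element_text is visible or the timeout expires.

    check is an assumed callable (screenshot + OCR text search) that
    returns True once the text is visible.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check(element_text):
            return {"found": True}
        time.sleep(interval)
    return {"found": False, "error": f"'{element_text}' not found within {timeout}s"}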
Prompt Composition
Chaining Prompts
Prompts can call other prompts:
Main Prompt: "full_e2e_test"
↓
Calls: "login_to_service"
↓
Calls: "verify_user_dashboard"
↓
Calls: "create_test_data"
↓
Calls: "run_test_scenario"
↓
Calls: "cleanup_test_data"Benefits:
Fallback Strategies
Prompts can specify what to do on failure:
{
"name": "robust_form_submit",
"description": "Submit form with retry on failure",
"arguments": [{"name": "url", "required": true}],
"template": [
"Navigate to {{url}}",
"Fill form",
"Click submit",
"Verify success",
"IF success → Return success",
"ELSE →",
" Try alternative submit button",
" Click alternative",
" Verify success",
" IF still failed → Return: Form submission failed"
]
}
MCP Protocol Methods for Prompts
prompts/list
List all available prompts.
Request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "prompts/list"
}
Response:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"prompts": [
{
"name": "analyze_dashboard",
"description": "Extract and analyze dashboard metrics",
"arguments": [
{"name": "url", "required": true},
{"name": "metrics", "required": false}
]
},
...
]
}
}
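For reference, a client-side sketch of issuing this request from Python, assuming the server is spawned over stdio and exchanges newline-delimited JSON-RPC messages (the entry-point name is hypothetical):
import json
import subprocess

# Assumption: the server exchanges newline-delimited JSON-RPC over stdio
server = subprocess.Popen(
    ["python", "vision_agent_mcp.py"],  # hypothetical entry point
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "prompts/list"}
server.stdin.write(json.dumps(request) + "\n")
server.stdin.flush()

response = json.loads(server.stdout.readline())
for prompt in response["result"]["prompts"]:
    print(prompt["name"], "-", prompt["description"])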
prompts/get
Get a specific prompt's definition.
Request:
{
"jsonrpc": "2.0",
"id": 2,
"method": "prompts/get",
"params": {
"name": "analyze_dashboard"
}
}
Response:
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"name": "analyze_dashboard",
"description": "Extract and analyze dashboard metrics",
"arguments": [
{"name": "url", "required": true},
{"name": "metrics", "required": false}
],
"template": [
"Navigate to {{url}}",
"Verify dashboard loaded",
"Capture screenshot",
"Extract numbers via OCR",
"Calculate: mean, median, growth_rate",
"Return formatted report"
]
}
}
Prompt Templates for Vision + Code MCP
Prompt 1: Complete Dashboard Analysis
Combines Vision and Code capabilities.
{
"name": "complete_dashboard_analysis",
"description": "Navigate to dashboard, capture screenshot, extract metrics, calculate statistics",
"arguments": [
{"name": "url", "required": true},
{"name": "metrics_to_extract", "required": false}
],
"template": [
"Step 1: Navigate",
" Call: vision_mcp.navigate(url={{url}})",
"Step 2: Verify",
" Call: vision_mcp.verify(text='Dashboard')",
"Step 3: Capture",
" Call: vision_mcp.screenshot()",
"Step 4: Extract",
" Use OCR to extract numbers from screenshot",
"Step 5: Analyze",
" Call: code_mcp.execute(code=calculate_statistics(extracted_numbers))",
"Step 6: Return",
" Format results: Metrics: [...], Stats: {mean, median, growth}"
]
}
Prompt 2: Form Testing Workflow
Complete test flow for form validation.
{
"name": "form_test_workflow",
"description": "Test form: navigate, fill, submit, verify",
"arguments": [
{"name": "form_url", "required": true},
{"name": "test_data", "required": true}
],
"template": [
"Step 1: Setup",
" Log: Starting form test for {{form_url}}",
"Step 2: Navigate",
" Call: vision_mcp.navigate(url={{form_url}})",
"Step 3: Verify form present",
" Call: vision_mcp.verify(text='Submit')",
"Step 4: Fill form",
" Parse: test_data JSON",
" For each field in test_data:",
" Call: vision_mcp.fill_form(url={{form_url}}, fields={{field.value}})",
"Step 5: Submit",
" Call: vision_mcp.find_element(text='Submit')",
" Call: vision_mcp.click_at(x=coords.x, y=coords.y)",
"Step 6: Verify result",
" Call: vision_mcp.verify(text='Success' OR text='Thank you')",
"Step 7: Screenshot evidence",
" Call: vision_mcp.screenshot()",
"Step 8: Report",
" Return: PASS if verified, FAIL otherwise"
]
}
Prompt 3: Multi-Site Comparison
Compare metrics across multiple sites.
{
"name": "compare_multiple_sites",
"description": "Navigate to multiple URLs, extract metrics, compare results",
"arguments": [
{"name": "urls", "required": true},
{"name": "metric_name", "required": false}
],
"template": [
"Initialize: results = []",
"FOR EACH url in urls:",
" Call: vision_mcp.navigate(url={{url}})",
" Call: vision_mcp.screenshot()",
" Call: vision_mcp.verify(text={{metric_name}})",
" IF verified:",
" Call: vision_mcp.ocr_extract_numbers()",
" Add to results: {url, metric: value}",
" ELSE:",
" Add to results: {url, metric: NOT_FOUND}",
"Call: code_mcp.execute(code=compare_results(results))",
"Return: Comparison report with best/worst performers"
]
}
Prompt Development Best Practices
1. Clear Names
❌ {"name": "d1", "description": "Do dashboard thing"}
✅ {"name": "analyze_dashboard", "description": "Analyze dashboard metrics and return key insights"}2. Explicit Arguments
❌ {"arguments": [{"name": "u"}]}
✅ {"arguments": [{"name": "dashboard_url", "description": "Full URL of dashboard to analyze"}]}3. Step-by-Step Templates
❌ {"template": "Navigate, screenshot, analyze"}
✅ {"template": [
"Step 1: Navigate to {{url}}",
"Step 2: Verify page loaded",
"Step 3: Capture screenshot",
"Step 4: Extract metrics via OCR",
"Step 5: Calculate statistics",
"Step 6: Return formatted report"
]}
4. Error Handling
❌ {"template": "Try to navigate, if fail return error"}
✅ {"template": [
"Try: vision_mcp.navigate(url={{url}})",
"IF fail:",
" Return: Cannot navigate to {{url}}",
"ELSE:",
" Continue to next step"
]}
5. Logging
✅ {"template": [
"Log: Starting dashboard analysis for {{url}}",
"Step 1: Navigate",
" Log: Navigating to {{url}}",
" Call: vision_mcp.navigate(url={{url}})",
" Log: Navigation complete",
...
]}
Prompt Storage
File Structure
~/.openclaw/workspace/prompts/
├── vision/
│ ├── analyze_dashboard.json
│ ├── form_test_workflow.json
│ └── compare_sites.json
├── code/
│ ├── analyze_data.json
│ └── calculate_statistics.json
└── combined/
├── full_dashboard_analysis.json
    └── e2e_test_suite.json
Format
{
"name": "prompt_name",
"version": "1.0",
"description": "Human-readable description",
"category": "vision|code|combined",
"tags": ["tag1", "tag2"],
"arguments": [
{
"name": "arg_name",
"description": "What this argument does",
"required": true|false,
"type": "string|number|object"
}
],
"template": [
"Step or condition description",
"Tool call or nested prompt",
"Return format"
],
"metadata": {
"author": "Seneca",
"created": "2026-02-04",
"dependencies": ["vision-mcp", "code-mcp"],
"tested": true
}
}
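A small sketch of writing a prompt definition in this format into the directory layout above; save_prompt is a hypothetical helper that derives the target folder from the prompt's category field:
import json
from pathlib import Path

PROMPTS_DIR = Path.home() / ".openclaw" / "workspace" / "prompts"

def save_prompt(prompt: dict) -> Path:
    """Write a prompt JSON file into its category folder (vision/code/combined)."""
    category = prompt.get("category", "combined")
    target = PROMPTS_DIR / category / f"{prompt['name']}.json"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(prompt, indent=2))
    return target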
Implementing Prompts in MCP Server
Code Structure
import json
from pathlib import Path
from typing import Dict, List, Optional

# Load prompts from disk (layout described in "Prompt Storage" above)
WORKSPACE = Path.home() / ".openclaw" / "workspace"
PROMPTS_DIR = WORKSPACE / "prompts"

def load_prompts() -> List[Dict]:
    """Load all prompt JSON files, one category directory at a time."""
    prompts = []
    for category_dir in PROMPTS_DIR.iterdir():
        if not category_dir.is_dir():
            continue
        for prompt_file in category_dir.glob("*.json"):
            with open(prompt_file, "r") as f:
                prompts.append(json.load(f))
    return prompts

def get_prompt(name: str) -> Optional[Dict]:
    """Get a specific prompt by name."""
    for prompt in load_prompts():
        if prompt.get("name") == name:
            return prompt
    return None
Method: prompts/list
elif method == "prompts/list":
    prompts = load_prompts()
    response["result"] = {
        "prompts": prompts
    }
Method: prompts/get
elif method == "prompts/get":
    params = request.get("params", {})
    prompt_name = params.get("name")
    prompt = get_prompt(prompt_name)
    if prompt:
        response["result"] = prompt
    else:
        response["error"] = {
            "code": -32602,
            "message": f"Prompt not found: {prompt_name}"
        }
Example: Full Prompt Implementation
Prompt: analyze_dashboard
{
"name": "analyze_dashboard",
"version": "1.0",
"description": "Navigate to dashboard, capture screenshot, extract metrics, calculate statistics",
"category": "combined",
"tags": ["vision", "code", "analysis", "dashboard"],
"arguments": [
{
"name": "dashboard_url",
"description": "Full URL of the dashboard to analyze",
"required": true,
"type": "string"
},
{
"name": "metrics",
"description": "Comma-separated list of specific metrics to extract (optional)",
"required": false,
"type": "string"
},
{
"name": "include_screenshot",
"description": "Whether to include screenshot in results",
"required": false,
"type": "boolean"
}
],
"template": [
{
"step": 1,
"action": "log",
"message": "Starting dashboard analysis for {{dashboard_url}}"
},
{
"step": 2,
"action": "tool_call",
"tool": "vision_mcp.navigate",
"arguments": {
"url": "{{dashboard_url}}"
}
},
{
"step": 3,
"action": "verification",
"method": "tools/call",
"tool": "vision_mcp.verify",
"arguments": {
"text": "Dashboard"
}
},
{
"step": 4,
"action": "tool_call",
"tool": "vision_mcp.screenshot",
"arguments": {}
},
{
"step": 5,
"action": "log",
"message": "Extracting metrics from screenshot..."
},
{
"step": 6,
"action": "tool_call",
"tool": "vision_mcp.ocr_extract_numbers",
"arguments": {}
},
{
"step": 7,
"action": "tool_call",
"tool": "code_mcp.execute",
"arguments": {
"code": "data = extracted_numbers\nstats = calculate_statistics(data)\nprint(json.dumps(stats))"
}
},
{
"step": 8,
"action": "return",
"format": {
"dashboard_url": "{{dashboard_url}}",
"metrics_extracted": "{{extracted_numbers}}",
"statistics": "{{code_result}}",
"screenshot_path": "{{screenshot_result}}"
}
}
],
"metadata": {
"author": "Seneca",
"created": "2026-02-04",
"dependencies": ["vision-mcp", "code-mcp"],
"tested": false
}
}
Prompt Execution Engine
Template Variables
Replace {{variable}} with actual values:
def render_template(template, variables: dict):
    """Recursively substitute {{variable}} placeholders in strings, dicts, and lists."""
    if isinstance(template, str):
        rendered = template
        for key, value in variables.items():
            rendered = rendered.replace(f"{{{{{key}}}}}", str(value))
        return rendered
    if isinstance(template, dict):
        return {k: render_template(v, variables) for k, v in template.items()}
    if isinstance(template, list):
        return [render_template(item, variables) for item in template]
    return template
Conditional Execution
Handle IF/ELSE/REPEAT logic:
def execute_template_step(step: dict, context: dict):
    """Execute a single template step.

    call_tool, call_prompt, and log are helpers assumed to be provided
    by the surrounding execution engine.
    """
    action = step.get("action")
    if action == "tool_call":
        tool = step.get("tool")
        args = render_template(step.get("arguments", {}), context)
        return call_tool(tool, args)
    elif action == "prompt_call":
        prompt = step.get("prompt")
        args = render_template(step.get("arguments", {}), context)
        return call_prompt(prompt, args)
    elif action == "verification":
        # Verification logic (e.g. a vision_mcp.verify call) goes here
        pass
    elif action == "log":
        message = render_template(step.get("message", ""), context)
        log(message)
        return "logged"
Advanced Features
Prompt Variables Capture
Capture outputs from previous steps:
{
"template": [
{
"step": 1,
"action": "tool_call",
"tool": "vision_mcp.screenshot",
"arguments": {},
"save_as": "screenshot_result"
},
{
"step": 2,
"action": "tool_call",
"tool": "vision_mcp.ocr_extract",
"arguments": {
"image": "{{screenshot_result.path}}"
},
"save_as": "extracted_numbers"
},
{
"step": 3,
"action": "return",
"format": "{{screenshot_result}}: {{extracted_numbers}}"
}
]
}
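A sketch of how the engine could thread these captured outputs through the context so later {{...}} references resolve; it builds on execute_template_step from the previous section, with the save_as handling as an assumption:
def run_template(prompt: dict, arguments: dict) -> dict:
    """Execute steps in order, storing each result under its save_as key."""
    context = dict(arguments)
    for step in prompt["template"]:
        result = execute_template_step(step, context)
        if isinstance(step, dict) and step.get("save_as"):
            context[step["save_as"]] = result
    return context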
Error Recovery
Define fallback strategies:
{
"template": [
{
"step": 1,
"action": "tool_call",
"tool": "vision_mcp.navigate",
"on_failure": {
"retry": 3,
"backoff": "exponential",
"then": "skip_to_step": 5
}
}
}
]Parallel Execution
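A sketch of honoring on_failure with retries and exponential backoff. The field names follow the JSON above; call_tool is the assumed tool-dispatch helper from the execution engine:
import time

def run_with_recovery(step: dict):
    """Retry a failing tool call according to its on_failure policy (sketch)."""
    policy = step.get("on_failure", {})
    retries = policy.get("retry", 0)
    for attempt in range(retries + 1):
        try:
            return call_tool(step["tool"], step.get("arguments", {}))
        except Exception:
            if attempt == retries:
                # Out of retries: report the configured skip target to the engine
                then = policy.get("then", {})
                return {"failed": True, "skip_to_step": then.get("skip_to_step")}
            # Exponential backoff doubles the wait each attempt; otherwise wait 1s
            wait = 2 ** attempt if policy.get("backoff") == "exponential" else 1
            time.sleep(wait)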
Parallel Execution
Run multiple steps in parallel:
{
"template": [
{
"step": 1,
"action": "parallel",
"tasks": [
{"tool": "vision_mcp.navigate", "url": "{{url1}}"},
{"tool": "vision_mcp.navigate", "url": "{{url2}}"},
{"tool": "vision_mcp.navigate", "url": "{{url3}}"}
]
}
]
}
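One way to run such tasks concurrently, sketched with a thread pool; the task shape follows the JSON above, and call_tool is again the assumed dispatcher:
from concurrent.futures import ThreadPoolExecutor

def run_parallel_step(step: dict) -> list:
    """Run each task of a 'parallel' step concurrently; results keep task order."""
    tasks = step.get("tasks", [])
    with ThreadPoolExecutor(max_workers=len(tasks) or 1) as pool:
        futures = [
            pool.submit(call_tool, task["tool"],
                        {k: v for k, v in task.items() if k != "tool"})
            for task in tasks
        ]
        return [f.result() for f in futures]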
Use Cases
1. Automated Testing
Prompt: e2e_test_suite
{
"name": "e2e_test_suite",
"description": "Run full end-to-end test suite",
"template": [
"Call: login_to_service",
"Call: verify_user_dashboard",
"Call: run_scenario_tests",
"Call: generate_test_report",
"Return: Test results summary"
]
}
2. Data Extraction
Prompt: extract_site_data
{
"name": "extract_site_data",
"description": "Extract structured data from website",
"template": [
"Navigate to site",
"Screenshot",
"OCR extract all text",
"Code: Parse into structured format (JSON)",
"Return: Parsed data"
]
}
3. Monitoring
Prompt: health_check_multiple
{
"name": "health_check_multiple",
"description": "Check health of multiple services",
"template": [
"FOR EACH service in services:",
" Navigate to service URL",
" Check for error messages",
" IF error → Log: DOWN, ELSE → Log: UP",
"Generate: Health status report"
]
}
Testing Prompts
Unit Test
Test individual prompts:
def test_prompt(prompt_name: str):
    """Test a single prompt."""
    prompt = get_prompt(prompt_name)
    test_args = {"dashboard_url": "http://test.local"}
    # Execute prompt
    result = execute_prompt(prompt, test_args)
    # Verify result structure
    assert "metrics" in result
    assert "statistics" in result
    print(f"✅ {prompt_name} test passed")
Integration Test
Test prompt chains:
def test_prompt_chain():
    """Test prompt calling other prompts."""
    result = execute_prompt("e2e_test_suite", {...})
    # Verify all sub-prompts were called
    assert "login_result" in result
    assert "test_results" in result
    assert "report" in result
    print("✅ Prompt chain test passed")
Documentation
Prompt Catalog
Maintain a catalog of available prompts:
# Available Prompts
## Vision Prompts
- `analyze_dashboard` - Analyze dashboard metrics
- `form_test_workflow` - Test form submission
- `compare_sites` - Compare multiple sites
## Code Prompts
- `analyze_data` - Statistical analysis
- `calculate_metrics` - Compute custom metrics
## Combined Prompts
- `e2e_test_suite` - Full test workflow
- `health_check_multiple` - Multi-service monitoring
See the `prompts/` directory for full definitions.
Key Insight: Prompts complete the MCP trio (Tools, Resources, Prompts). While Tools perform actions and Resources provide data, Prompts orchestrate complex multi-step workflows with reusability, testing, and standardization.
Next: Implement prompts/list and prompts/get methods in vision-agent-mcp server.