Introduction
The Model Context Protocol (MCP) represents a paradigm shift in how AI applications interact with external systems. Just as HTTP standardized web communication, MCP standardizes how AI models connect to tools, databases, and services. This comprehensive guide covers MCP architecture, implementation, and building production-ready AI applications.
MCP enables AI models to not just generate text, but to take actions, query databases, call APIs, and interact with real-world systems through a standardized interface.
What is Model Context Protocol?
The Problem MCP Solves
Before MCP, integrating AI models with external systems required:
- Custom integrations for each tool or service
- Hand-coded API wrappers
- No standardized way to describe tool capabilities
- Difficult-to-maintain codebases
┌───────────────────────────────────────────────────────────────┐
│                   Before MCP: N × M Problem                   │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│   AI Model                                                    │
│      │                                                        │
│      ├────► Database                                          │
│      ├────► File System                                       │
│      ├────► APIs                                              │
│      ├────► GitHub                                            │
│      ├────► Slack                                             │
│      └────► ... (custom integration each time)                │
│                                                               │
│   Each model × each tool = custom code                        │
│                                                               │
└───────────────────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────────────────┐
│                    After MCP: Standardized                    │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│   AI Model                                                    │
│      │                                                        │
│      └────► MCP Server                                        │
│                 │                                             │
│                 ├────► Database                               │
│                 ├────► File System                            │
│                 ├────► APIs                                   │
│                 └────► ... (any MCP-compatible tool)          │
│                                                               │
│   Standard protocol, reusable connections                     │
│                                                               │
└───────────────────────────────────────────────────────────────┘
MCP Architecture
┌───────────────────────────────────────────────────────────────┐
│                       MCP Architecture                        │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│   ┌─────────────┐     MCP Protocol     ┌─────────────┐        │
│   │   Claude    │◄────────────────────►│ MCP Server  │        │
│   │   (LLM)     │                      │             │        │
│   └─────────────┘                      └──────┬──────┘        │
│                                               │               │
│                             ┌─────────────────┤               │
│                             │                 │               │
│                        ┌────┴─────┐     ┌────┴─────┐          │
│                        │  Files   │     │ Database │          │
│                        └──────────┘     └──────────┘          │
│                                                               │
│     Resources            Tools            Prompts             │
│      (data)            (actions)        (templates)           │
│                                                               │
└───────────────────────────────────────────────────────────────┘
MCP Components
1. Resources
Resources are data sources that the AI can read:

{
  "uri": "file:///project/README.md",
  "name": "Project README",
  "mimeType": "text/markdown",
  "description": "Project documentation"
}
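On the server side, descriptors like this can be built mechanically from the filesystem. A minimal sketch using only the standard library (the helper name `describe_file` is mine, not part of MCP):

```python
import mimetypes
from pathlib import Path

def describe_file(path: str, description: str = "") -> dict:
    """Build an MCP-style resource descriptor for a local file."""
    p = Path(path)
    mime, _ = mimetypes.guess_type(p.name)
    return {
        "uri": p.absolute().as_uri(),       # file:///... URI
        "name": p.name,
        "mimeType": mime or "application/octet-stream",
        "description": description,
    }

desc = describe_file("notes.txt", "Meeting notes")
print(desc["mimeType"])  # text/plain
```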
2. Tools
Tools are actions the AI can invoke:
{
  "name": "execute_sql",
  "description": "Execute a SQL query on the database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "SQL query to execute"
      }
    },
    "required": ["query"]
  }
}
3. Prompts
Prompts are reusable templates for common tasks. Per the MCP specification, a prompt declares its arguments as a list of named descriptors:

{
  "name": "analyze_code",
  "description": "Analyze code for issues",
  "arguments": [
    {
      "name": "file_path",
      "description": "Path to the file to analyze",
      "required": true
    }
  ]
}
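When a client requests a prompt, the server fills the template with the supplied arguments and returns it as messages. A toy sketch of that rendering step (`render_prompt` and the template text are mine, for illustration):

```python
def render_prompt(template: str, arguments: dict) -> dict:
    """Fill a prompt template and wrap it as an MCP-style user message."""
    text = template.format(**arguments)
    return {"role": "user", "content": {"type": "text", "text": text}}

msg = render_prompt(
    "Analyze the code in {file_path} and list potential issues.",
    {"file_path": "src/app.py"},
)
print(msg["content"]["text"])
```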
Building MCP Servers
Python Implementation
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import asyncio

class MyMCPServer:
    def __init__(self):
        self.server = Server("my-server")
        self.setup_handlers()

    def setup_handlers(self):
        @self.server.list_tools()
        async def list_tools():
            return [
                Tool(
                    name="read_file",
                    description="Read a file from the filesystem",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "path": {
                                "type": "string",
                                "description": "Path to file"
                            }
                        },
                        "required": ["path"]
                    }
                ),
                Tool(
                    name="execute_command",
                    description="Execute a shell command",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "command": {
                                "type": "string",
                                "description": "Command to execute"
                            }
                        },
                        "required": ["command"]
                    }
                )
            ]

        @self.server.call_tool()
        async def call_tool(name: str, arguments: dict):
            if name == "read_file":
                return await self.read_file(arguments["path"])
            elif name == "execute_command":
                return await self.execute_command(arguments["command"])
            raise ValueError(f"Unknown tool: {name}")

    async def read_file(self, path: str):
        try:
            with open(path, "r") as f:
                content = f.read()
            return [TextContent(type="text", text=content)]
        except Exception as e:
            return [TextContent(type="text", text=f"Error: {e}")]

    async def execute_command(self, command: str):
        proc = await asyncio.create_subprocess_shell(
            command,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE
        )
        stdout, stderr = await proc.communicate()
        # Include stderr so failures are visible to the model
        output = stdout.decode() + (stderr.decode() if stderr else "")
        return [TextContent(type="text", text=output)]

    async def run(self):
        async with stdio_server() as (read_stream, write_stream):
            await self.server.run(
                read_stream,
                write_stream,
                self.server.create_initialization_options()
            )

# Run server
if __name__ == "__main__":
    server = MyMCPServer()
    asyncio.run(server.run())
Using with Claude
# client.py - Connect to an MCP server over stdio
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/path"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print(f"Available tools: {[t.name for t in tools.tools]}")

            # Call a tool
            result = await session.call_tool("read_file", {"path": "/project/README.md"})
            print(result.content[0].text)

asyncio.run(main())
Use Cases
1. Database Integration
# Database MCP Server (sketch; assumes a @tool decorator that registers methods)
class DatabaseMCPServer:
    def __init__(self, connection_string):
        self.connection_string = connection_string

    @tool(description="Execute SQL query")
    async def execute_sql(self, query: str):
        # Execute query and return results
        pass

    @tool(description="List database tables")
    async def list_tables(self):
        # Return list of tables
        pass

    @tool(description="Get table schema")
    async def get_schema(self, table: str):
        # Return table schema
        pass
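One plausible way to flesh out the first two stubs, sketched synchronously against SQLite via the standard library (a real server would dispatch on connection_string to pick a driver; everything beyond the stub names is an assumption of this sketch):

```python
import json
import sqlite3

class DatabaseMCPServer:
    def __init__(self, connection_string: str):
        # Sketch: treat the connection string as a SQLite path
        self.conn = sqlite3.connect(connection_string)

    def execute_sql(self, query: str) -> str:
        """Run a query and return rows as JSON text for the model."""
        cur = self.conn.execute(query)
        cols = [d[0] for d in cur.description]
        rows = [dict(zip(cols, row)) for row in cur.fetchall()]
        return json.dumps(rows)

    def list_tables(self) -> list:
        cur = self.conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        )
        return [row[0] for row in cur.fetchall()]

server = DatabaseMCPServer(":memory:")
server.conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
server.conn.execute("INSERT INTO users VALUES (1, 'Ada')")
print(server.list_tables())                          # ['users']
print(server.execute_sql("SELECT name FROM users"))
```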
2. GitHub Integration
# GitHub MCP Server
class GitHubMCPServer:
    @tool(description="Create a GitHub issue")
    async def create_issue(self, owner: str, repo: str, title: str, body: str):
        # Create issue via GitHub API
        pass

    @tool(description="List pull requests")
    async def list_prs(self, owner: str, repo: str):
        # List PRs
        pass

    @tool(description="Get code review")
    async def get_review(self, owner: str, repo: str, pr: int):
        # Get PR review
        pass
3. File System Operations
# File System MCP Server
class FileSystemMCPServer:
    @tool(description="Search files by pattern")
    async def search_files(self, pattern: str, path: str = "."):
        # Glob search
        pass

    @tool(description="Read file contents")
    async def read_file(self, path: str):
        # Read file
        pass

    @tool(description="Write file")
    async def write_file(self, path: str, content: str):
        # Write file
        pass
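The search_files stub maps naturally onto pathlib's recursive globbing. A self-contained sketch (the standalone function and demo directory are mine):

```python
import tempfile
from pathlib import Path

def search_files(pattern: str, path: str = ".") -> list:
    """Recursively match files under `path` against a glob pattern."""
    return sorted(str(p) for p in Path(path).rglob(pattern) if p.is_file())

# Tiny demo in a throwaway directory
root = tempfile.mkdtemp()
Path(root, "a.py").write_text("print('hi')")
Path(root, "b.txt").write_text("note")
print(search_files("*.py", root))  # one entry, ending in a.py
```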
Best Practices
1. Tool Design
# Good tool design principles

# 1. Clear, descriptive names
Tool(name="create_user", description="Create a new user account")

# 2. Comprehensive descriptions
Tool(
    name="calculate_shipping",
    description="""Calculate shipping cost for an order.

    Args:
        weight: Package weight in pounds
        destination: Two-letter state code
        service: One of 'ground', 'express', 'overnight'

    Returns:
        Shipping cost in dollars
    """
)

# 3. Clear input schemas
Tool(
    name="process_order",
    inputSchema={
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Unique order identifier (e.g., ORD-12345)"
            },
            "notify_customer": {
                "type": "boolean",
                "description": "Send email confirmation",
                "default": True
            }
        },
        "required": ["order_id"]
    }
)
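A schema like process_order's is only useful if the server enforces it. A minimal hand-rolled check for required fields and basic types (a production server would use a full JSON Schema validator such as the jsonschema library; `validate_input` is my name for the sketch):

```python
TYPE_MAP = {"string": str, "boolean": bool, "number": (int, float), "object": dict}

def validate_input(schema: dict, arguments: dict) -> list:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for field in schema.get("required", []):
        if field not in arguments:
            errors.append(f"missing required field: {field}")
    for field, value in arguments.items():
        spec = schema.get("properties", {}).get(field)
        if spec is None:
            errors.append(f"unexpected field: {field}")
        elif not isinstance(value, TYPE_MAP[spec["type"]]):
            errors.append(f"{field}: expected {spec['type']}")
    return errors

schema = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string"},
        "notify_customer": {"type": "boolean"},
    },
    "required": ["order_id"],
}
print(validate_input(schema, {"order_id": "ORD-12345"}))   # []
print(validate_input(schema, {"notify_customer": "yes"}))  # two errors
```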
2. Error Handling
async def safe_tool_execution(tool_name: str, arguments: dict):
    try:
        result = await execute_tool(tool_name, arguments)
        return {"success": True, "data": result}
    except ValidationError as e:
        return {"success": False, "error": f"Invalid input: {e}"}
    except PermissionError as e:
        return {"success": False, "error": f"Permission denied: {e}"}
    except Exception as e:
        return {"success": False, "error": f"Unexpected error: {e}"}
3. Security
# Security best practices
import time
from collections import defaultdict

# 1. Validate all inputs
def validate_tool_input(tool_name: str, arguments: dict):
    # Validate arguments against the tool's JSON schema before executing
    pass

# 2. Implement rate limiting (simple sliding-window sketch)
class RateLimitedMCPServer:
    def __init__(self):
        self.limits = {"default": 100}  # requests per minute
        self.calls = defaultdict(list)  # tool -> recent call timestamps

    async def handle_request(self, tool: str, args: dict):
        # Drop timestamps older than a minute, then check the limit
        now = time.time()
        self.calls[tool] = [t for t in self.calls[tool] if now - t < 60]
        if len(self.calls[tool]) >= self.limits.get(tool, self.limits["default"]):
            raise RuntimeError(f"Rate limit exceeded for {tool}")
        self.calls[tool].append(now)

# 3. Audit logging
class AuditedMCPServer:
    async def call_tool(self, tool: str, args: dict, user: str):
        # Log all tool calls
        logger.info(f"User {user} called {tool} with {args}")
        # Execute tool
        result = await self.execute(tool, args)
        logger.info(f"Tool {tool} returned: {result}")
        return result
MCP in Production
Deployment
# docker-compose.yml
services:
  mcp-server:
    build: ./mcp-server
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - API_KEYS=${API_KEYS}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  claude-desktop:
    image: anthropic/claude-desktop
    environment:
      - MCP_SERVER_URL=http://mcp-server:8080
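For local development, MCP servers are more commonly launched by the client itself than deployed as long-running services: Claude Desktop, for instance, spawns stdio servers declared in its claude_desktop_config.json. A sketch of that config (the server name, command, and paths here are placeholders):

```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/path/to/server.py"],
      "env": { "DATABASE_URL": "postgres://localhost/mydb" }
    }
  }
}
```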
Monitoring
# Metrics collection
import time

class MonitoredMCPServer:
    def __init__(self):
        self.metrics = {
            "tool_calls": 0,
            "tool_errors": 0,
            "avg_response_time": 0.0
        }

    async def call_tool(self, tool: str, args: dict):
        start = time.time()
        try:
            return await self.execute(tool, args)
        except Exception:
            self.metrics["tool_errors"] += 1
            raise
        finally:
            self.metrics["tool_calls"] += 1
            # Update the running average response time (includes failed calls)
            duration = time.time() - start
            n = self.metrics["tool_calls"]
            avg = self.metrics["avg_response_time"]
            self.metrics["avg_response_time"] = avg + (duration - avg) / n
Conclusion
MCP represents a fundamental shift in AI application architecture. By standardizing how AI models interact with external systems, MCP enables:
- Reusability: Write once, use with any MCP-compatible AI
- Composability: Combine multiple MCP servers
- Testability: Test tools independently
- Security: Standardized permission model
- Maintainability: Clear interfaces, separated concerns
As AI applications become more sophisticated, MCP will become the standard for connecting AI models to the tools and data they need.