⚡ Calmops

Building AI Agents with MCP: Model Context Protocol Complete Guide

Introduction

The gap between AI assistants that can only talk and AI agents that can actually do has been bridged by protocols that enable secure, structured communication between AI models and the external world. The Model Context Protocol (MCP), pioneered by Anthropic and adopted across the AI industry, provides a standardized way for AI systems to use tools, access data, and take actions.

MCP has rapidly become a de facto standard for building capable AI agents. Whether you're building a personal AI assistant or an enterprise automation system, understanding MCP is essential. This guide covers everything from protocol fundamentals to advanced implementations.

Understanding Model Context Protocol

What is MCP?

MCP is a specification that defines how AI models can interact with external resources through a standardized interface. It enables AI systems to:

  • Discover and use tools (functions that can be called)
  • Access data sources (databases, APIs, files)
  • Perform actions (send emails, create tickets, update records)
  • Maintain context across interactions

Why MCP Matters

Before MCP, integrating AI with external systems required custom implementations for each integration. MCP provides:

  • Standardization: One protocol for all tool integrations
  • Security: Controlled access to sensitive resources
  • Composability: Tools can be combined in powerful ways
  • Interoperability: Tools built for one MCP-compatible system work with others

Architecture Overview

┌─────────────┐      MCP      ┌──────────────┐
│  AI Model   │◄─────────────►│   MCP Host   │
│  (Claude,   │               │ (Claude Code,│
│   etc.)     │               │   Cursor)    │
└──────┬──────┘               └──────┬───────┘
       │                             │
       │                      ┌──────▼───────┐
       │                      │  MCP Server  │
       │                      │    (Your     │
       │                      │    Tools)    │
       │                      └──────┬───────┘
       │                             │
       ▼                             ▼
┌─────────────┐               ┌──────────────┐
│    User     │               │   External   │
│    Input    │               │  Resources   │
└─────────────┘               └──────────────┘
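On the wire, host and server exchange JSON-RPC 2.0 messages. The method names `tools/list` and `tools/call` come from the MCP specification; the payloads below are trimmed, illustrative sketches rather than complete messages:

```python
import json

# Host -> server: ask which tools are available (JSON-RPC 2.0).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Server -> host: the advertised tools (payload shortened for illustration).
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "get_weather", "description": "Get current weather"}
        ]
    },
}

# Host -> server: invoke a tool with arguments matching its inputSchema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"location": "Berlin"}},
}

# Over the stdio transport, each message travels as a line of JSON.
wire = json.dumps(call_request)
```

Responses are matched to requests by `id`, which is what lets a host issue several tool calls concurrently over one connection.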

MCP Components

Tool Definitions

Tools are functions the AI can call:

// Tool definition structure
{
  name: "get_weather",
  description: "Get the current weather for a location",
  inputSchema: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "City name or coordinates"
      },
      units: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        default: "celsius"
      }
    },
    required: ["location"]
  }
}
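A server should reject calls whose arguments don't match the declared schema. A real server would typically lean on a JSON Schema library; this minimal hand-rolled sketch covers only the `required` and `enum` keywords from the `get_weather` schema above:

```python
def validate_args(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    # Every field listed in "required" must be present.
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    # Supplied values must respect any "enum" constraint.
    for field, spec in schema.get("properties", {}).items():
        if field in args and "enum" in spec and args[field] not in spec["enum"]:
            errors.append(f"{field} must be one of {spec['enum']}")
    return errors

schema = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["location"],
}

ok_errors = validate_args(schema, {"location": "Oslo"})
bad_errors = validate_args(schema, {"location": "Oslo", "units": "kelvin"})
```

Returning a list of errors rather than raising on the first one gives the model enough feedback to correct an entire call in one retry.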

Resource Definitions

Resources are data the AI can read:

{
  uri: "file:///notes/project-ideas.md",
  name: "Project Ideas",
  description: "List of potential projects to work on",
  mimeType: "text/markdown"
}

Prompt Templates

Prompts are reusable prompt patterns:

{
  name: "summarize_document",
  description: "Summarize a document with key points",
  arguments: [
    {
      name: "document_uri",
      description: "URI of the document to summarize"
    },
    {
      name: "focus_areas",
      description: "Specific areas to focus on"
    }
  ]
}
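When a host invokes a prompt, the server substitutes the supplied arguments into its stored template. The template text and argument values below are illustrative; only the argument names come from the definition above:

```python
def render_prompt(template: str, arguments: dict) -> str:
    """Fill a prompt template; a missing argument raises a KeyError."""
    return template.format(**arguments)

template = (
    "Summarize the document at {document_uri}. "
    "Focus on: {focus_areas}."
)

prompt = render_prompt(
    template,
    {"document_uri": "file:///notes/report.md", "focus_areas": "risks, costs"},
)
```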

Building MCP Servers

Basic MCP Server

#!/usr/bin/env python3
"""Simple MCP server example."""
from typing import Any

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool

class WeatherServer:
    def __init__(self):
        self.server = Server("weather-server")
        self._register_handlers()
    
    def _register_handlers(self):
        @self.server.list_tools()
        async def list_tools() -> list[Tool]:
            return [
                Tool(
                    name="get_weather",
                    description="Get current weather for a location",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "location": {
                                "type": "string",
                                "description": "City name"
                            }
                        },
                        "required": ["location"]
                    }
                ),
                Tool(
                    name="get_forecast",
                    description="Get weather forecast",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "location": {"type": "string"},
                            "days": {
                                "type": "integer",
                                "minimum": 1,
                                "maximum": 7,
                                "default": 3
                            }
                        },
                        "required": ["location"]
                    }
                )
            ]
        
        @self.server.call_tool()
        async def call_tool(
            name: str,
            arguments: dict | None
        ) -> Any:
            arguments = arguments or {}
            if name == "get_weather":
                return await self._get_weather(arguments["location"])
            elif name == "get_forecast":
                return await self._get_forecast(
                    arguments["location"],
                    arguments.get("days", 3)
                )
            else:
                raise ValueError(f"Unknown tool: {name}")
    
    async def _get_weather(self, location: str) -> str:
        # Implementation
        return f"Weather for {location}: 22°C, partly cloudy"
    
    async def _get_forecast(self, location: str, days: int) -> str:
        # Implementation
        return f"{days}-day forecast for {location}..."
    
    async def run(self):
        async with stdio_server() as (read_stream, write_stream):
            await self.server.run(
                read_stream,
                write_stream,
                self.server.create_initialization_options()
            )

if __name__ == "__main__":
    import asyncio

    # run() is a coroutine, so drive it with an event loop
    asyncio.run(WeatherServer().run())

File System MCP Server

from pathlib import Path

from mcp.server import Server
from mcp.types import Resource, Tool

class FileSystemServer:
    def __init__(self, root_path: str):
        self.root = Path(root_path).resolve()
        self.server = Server("filesystem")
        self._register()
    
    def _register(self):
        @self.server.list_tools()
        async def list_tools():
            return [
                Tool(
                    name="read_file",
                    description="Read contents of a file",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "path": {"type": "string"}
                        },
                        "required": ["path"]
                    }
                ),
                Tool(
                    name="write_file",
                    description="Write content to a file",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "path": {"type": "string"},
                            "content": {"type": "string"}
                        },
                        "required": ["path", "content"]
                    }
                ),
                Tool(
                    name="list_directory",
                    description="List files in a directory",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "path": {"type": "string"}
                        }
                    }
                ),
                Tool(
                    name="search_files",
                    description="Search for files by name pattern",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "pattern": {"type": "string"}
                        }
                    }
                )
            ]
        
        @self.server.call_tool()
        async def call_tool(name, arguments):
            if name == "read_file":
                return self._read_file(arguments["path"])
            elif name == "write_file":
                return self._write_file(arguments["path"], arguments["content"])
            elif name == "list_directory":
                return self._list_directory(arguments.get("path", "."))
            elif name == "search_files":
                return self._search_files(arguments["pattern"])
            else:
                raise ValueError(f"Unknown tool: {name}")
        
        @self.server.list_resources()
        async def list_resources():
            resources = []
            for path in self.root.rglob("*"):
                if path.is_file():
                    resources.append(
                        Resource(
                            uri=f"file://{path}",
                            name=path.name,
                            mimeType="text/plain"
                        )
                    )
            return resources
        
        @self.server.read_resource()
        async def read_resource(uri):
            path = Path(uri.replace("file://", ""))
            return path.read_text()
    
    def _read_file(self, path):
        full_path = self._safe_path(path)
        return full_path.read_text()
    
    def _write_file(self, path, content):
        full_path = self._safe_path(path)
        full_path.parent.mkdir(parents=True, exist_ok=True)
        full_path.write_text(content)
        return f"Written to {path}"
    
    def _list_directory(self, path):
        full_path = self._safe_path(path)
        items = []
        for item in full_path.iterdir():
            items.append(f"{'[DIR]' if item.is_dir() else '[FILE]'} {item.name}")
        return "\n".join(items)
    
    def _search_files(self, pattern):
        matches = list(self.root.glob(pattern))
        return "\n".join(str(m.relative_to(self.root)) for m in matches)
    
    def _safe_path(self, path):
        full_path = (self.root / path).resolve()
        # String-prefix checks can be fooled (/root vs /root2);
        # is_relative_to compares path components instead
        if not full_path.is_relative_to(self.root):
            raise ValueError("Path outside root directory")
        return full_path
    
    async def run(self):
        from mcp.server.stdio import stdio_server  # local import keeps the snippet drop-in

        async with stdio_server() as (read_stream, write_stream):
            await self.server.run(
                read_stream,
                write_stream,
                self.server.create_initialization_options()
            )

Database MCP Server

import sqlite3
from contextlib import contextmanager
from mcp.server import Server
from mcp.types import Tool

class DatabaseServer:
    def __init__(self, db_path: str):
        self.db_path = db_path
        self.server = Server("database")
        self._register()
    
    @contextmanager
    def _get_connection(self):
        conn = sqlite3.connect(self.db_path)
        try:
            yield conn
        finally:
            conn.close()
    
    def _register(self):
        @self.server.list_tools()
        async def list_tools():
            return [
                Tool(
                    name="query",
                    description="Execute a read-only SQL query",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "sql": {"type": "string"},
                            "params": {"type": "object"}
                        },
                        "required": ["sql"]
                    }
                ),
                Tool(
                    name="execute",
                    description="Execute a SQL statement (INSERT, UPDATE, DELETE)",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "sql": {"type": "string"},
                            "params": {"type": "object"}
                        },
                        "required": ["sql"]
                    }
                ),
                Tool(
                    name="list_tables",
                    description="List all tables in the database",
                    inputSchema={"type": "object", "properties": {}}
                ),
                Tool(
                    name="get_schema",
                    description="Get schema for a specific table",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "table": {"type": "string"}
                        },
                        "required": ["table"]
                    }
                )
            ]
        
        @self.server.call_tool()
        async def call_tool(name, arguments):
            if name == "query":
                return self._query(arguments["sql"], arguments.get("params"))
            elif name == "execute":
                return self._execute(arguments["sql"], arguments.get("params"))
            elif name == "list_tables":
                return self._list_tables()
            elif name == "get_schema":
                return self._get_schema(arguments["table"])
            else:
                raise ValueError(f"Unknown tool: {name}")
    
    def _query(self, sql, params=None):
        # The tool is advertised as read-only, so enforce that here
        # (PRAGMA is allowed because schema inspection goes through _query)
        if not sql.lstrip().upper().startswith(("SELECT", "PRAGMA")):
            raise ValueError("query only accepts read-only statements")
        params = params or {}
        with self._get_connection() as conn:
            conn.row_factory = sqlite3.Row
            cursor = conn.execute(sql, params)
            return [dict(row) for row in cursor.fetchall()]
    
    def _execute(self, sql, params=None):
        params = params or {}
        with self._get_connection() as conn:
            cursor = conn.execute(sql, params)
            conn.commit()
            return {"rows_affected": cursor.rowcount}
    
    def _list_tables(self):
        return self._query(
            "SELECT name FROM sqlite_master WHERE type='table'"
        )
    
    def _get_schema(self, table):
        # PRAGMA statements cannot take bound parameters, so validate
        # the identifier before interpolating it into the SQL string
        if not table.isidentifier():
            raise ValueError(f"Invalid table name: {table}")
        return self._query(f"PRAGMA table_info({table})")
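One detail worth noting: when `sqlite3` receives a dict of parameters, it binds them to named placeholders (`:name`), not to `?`. A self-contained demonstration with an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE research (topic TEXT, note TEXT)")
# Dict parameters bind to :named placeholders.
conn.execute(
    "INSERT INTO research VALUES (:topic, :note)",
    {"topic": "mcp", "note": "protocol study"},
)

# Row objects expose columns by name, so dict(row) gives JSON-friendly output.
conn.row_factory = sqlite3.Row
rows = [
    dict(r)
    for r in conn.execute(
        "SELECT * FROM research WHERE topic = :topic", {"topic": "mcp"}
    )
]
conn.close()
```

Binding parameters this way, rather than formatting values into the SQL string, is also what keeps the `query` tool safe from injection.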

MCP in Claude Code

Setting Up MCP

# Install Claude Code
# Then register MCP servers in the client's settings file

# macOS
~/Library/Application\ Support/Claude/settings.json

# Linux
~/.config/Claude/settings.json

Configuration

{
  "mcpServers": {
    "filesystem": {
      "command": "python3",
      "args": ["/path/to/filesystem_server.py"],
      "env": {
        "ROOT_PATH": "/home/user/projects"
      }
    },
    "database": {
      "command": "python3",
      "args": ["/path/to/database_server.py"],
      "env": {
        "DB_PATH": "/home/user/data/app.db"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}

Using MCP Tools

Once configured, Claude Code can use your tools:

User: What's in my projects folder?

Claude: I'll check the filesystem for you.

[Calls list_directory tool]

The projects folder contains:
- [DIR] website
- [DIR] api
- [DIR] docs
- [FILE] README.md

Would you like me to explore any of these?

Building AI Agents with MCP

Research Agent

class ResearchAgent:
    """AI agent that uses multiple MCP tools for research."""
    
    def __init__(self, llm, mcp_clients):
        self.llm = llm
        self.mcp = mcp_clients
    
    async def research_topic(self, topic):
        # 1. Search for relevant files
        files = await self.mcp.filesystem.search(f"*{topic}*")
        
        # 2. Read relevant documents
        documents = []
        for file in files[:5]:
            content = await self.mcp.filesystem.read(file)
            documents.append({"file": file, "content": content})
        
        # 3. Query knowledge base (named placeholder, bound from the dict)
        kb_results = await self.mcp.database.query(
            "SELECT * FROM research WHERE topic = :topic",
            {"topic": topic}
        )
        
        # 4. Synthesize findings
        prompt = f"""Research topic: {topic}

Relevant files:
{chr(10).join([d['file'] for d in documents])}

Knowledge base:
{kb_results}

Provide:
1. Summary of existing knowledge
2. Key gaps
3. Suggested next steps"""

        return await self.llm.generate(prompt)

Code Review Agent

class CodeReviewAgent:
    def __init__(self, llm, mcp_clients):
        self.llm = llm
        self.mcp = mcp_clients
    
    async def review_pr(self, repo, pr_number):
        # Get PR changes
        changes = await self.mcp.github.get_pr_changes(repo, pr_number)
        
        # Get relevant context from codebase
        context = []
        for file in changes["files"]:
            related = await self.mcp.filesystem.search(f"**/{file['name']}")
            for r in related[:2]:
                context.append(await self.mcp.filesystem.read(r))
        
        # Review code
        prompt = f"""Review these PR changes:

Files changed: {changes['files']}

Code context:
{chr(10).join(context)}

Focus on:
1. Bugs and security issues
2. Performance concerns
3. Code quality
4. Test coverage

Provide specific feedback."""

        review = await self.llm.generate(prompt)
        
        # Post review
        await self.mcp.github.post_review(
            repo, pr_number,
            body=review,
            event="COMMENT"
        )
        
        return review

Personal Assistant Agent

class PersonalAssistant:
    def __init__(self, llm, mcp_clients):
        self.llm = llm
        self.mcp = mcp_clients
    
    async def handle_request(self, user_request):
        # Parse intent
        intent = await self.llm.classify_intent(user_request)
        
        if intent == "get_weather":
            location = await self.llm.extract_location(user_request)
            return await self.mcp.weather.get_weather(location)
        
        elif intent == "add_task":
            task = await self.llm.extract_task(user_request)
            await self.mcp.todo.create_task(task)
            return "Task added!"
        
        elif intent == "find_file":
            pattern = await self.llm.extract_pattern(user_request)
            results = await self.mcp.filesystem.search(pattern)
            return self._format_results(results)
        
        elif intent == "answer_question":
            # Search knowledge base
            docs = await self.mcp.notes.search(user_request)
            
            prompt = f"""Question: {user_request}

Relevant notes:
{chr(10).join([d['content'][:500] for d in docs[:3]])}

Answer the question based on these notes."""

            return await self.llm.generate(prompt)

Advanced Patterns

Tool Chaining

class ToolChain:
    def __init__(self, mcp_client, llm):
        self.mcp = mcp_client
        self.llm = llm  # needed below for summarization
    
    async def research_and_notify(self, topic):
        # Chain: search โ†’ read โ†’ summarize โ†’ notify
        files = await self.mcp.search(f"*{topic}*")
        
        contents = []
        for f in files[:5]:
            contents.append(await self.mcp.read(f))
        
        summary = await self.llm.summarize(contents)
        
        await self.mcp.email.send(
            to="[email protected]",
            subject=f"Research: {topic}",
            body=summary
        )
        
        return summary

Parallel Tool Execution

import asyncio

async def parallel_research(query):
    # Execute multiple searches in parallel
    results = await asyncio.gather(
        mcp.filesystem.search(f"*{query}*"),
        mcp.database.query(
            "SELECT * FROM docs WHERE title LIKE :query",
            {"query": f"%{query}%"}
        ),
        mcp.web.search(query)
    )
    
    filesystem_results, db_results, web_results = results
    
    return await synthesize_all(query, filesystem_results, db_results, web_results)
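`asyncio.gather` runs its awaitables concurrently and returns their results in call order, which is what makes the tuple unpacking above safe. A self-contained sketch with stand-in search functions (the real calls would go through the MCP clients):

```python
import asyncio

async def search_files(query: str) -> list[str]:
    await asyncio.sleep(0.01)  # simulate I/O latency
    return [f"notes/{query}.md"]

async def search_db(query: str) -> list[dict]:
    await asyncio.sleep(0.01)
    return [{"title": query}]

async def parallel_research(query: str):
    # Both searches run concurrently; results come back in call order.
    return await asyncio.gather(search_files(query), search_db(query))

fs_hits, db_hits = asyncio.run(parallel_research("mcp"))
```

With sequential awaits the total latency is the sum of the calls; with `gather` it is roughly the slowest one.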

Stateful Tools

class ConversationState:
    """Maintain state across tool calls."""

    def __init__(self, llm):
        self.llm = llm  # used below to generate replies
        self.conversations = {}
    
    async def process(self, user_id, message):
        if user_id not in self.conversations:
            self.conversations[user_id] = []
        
        # Add to history
        self.conversations[user_id].append(
            {"role": "user", "content": message}
        )
        
        # Process with context
        response = await self.llm.chat(
            messages=self.conversations[user_id]
        )
        
        self.conversations[user_id].append(
            {"role": "assistant", "content": response}
        )
        
        return response

Best Practices

Tool Design

  1. Clear Descriptions: Write descriptions that help the AI understand when to use each tool

  2. Proper Schema: Use JSON Schema for input validation

  3. Error Handling: Return meaningful error messages

  4. Idempotency: Tools should be safe to call multiple times
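One way to get idempotency is to key each mutating call on a caller-supplied request ID and replay the stored result on retries. A minimal sketch (the `request_id` convention here is an assumption for illustration, not part of the MCP spec):

```python
class IdempotentTool:
    """Replay cached results so retries don't repeat side effects."""

    def __init__(self):
        self._seen: dict[str, str] = {}
        self.side_effects = 0

    def create_ticket(self, request_id: str, title: str) -> str:
        if request_id in self._seen:  # retry: replay, don't re-run
            return self._seen[request_id]
        self.side_effects += 1        # the one real side effect
        result = f"ticket created: {title}"
        self._seen[request_id] = result
        return result

tool = IdempotentTool()
first = tool.create_ticket("req-1", "Fix login bug")
retry = tool.create_ticket("req-1", "Fix login bug")
```

This matters for agents in particular, because a model may retry a tool call after a timeout even though the first attempt actually succeeded.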

Security

class SecureTool:
    def __init__(self, allowed_paths):
        # Resolve up front so comparisons are against absolute paths
        self.allowed_paths = [Path(p).resolve() for p in allowed_paths]

    def _validate_path(self, path):
        resolved = Path(path).resolve()
        if not any(resolved.is_relative_to(p) for p in self.allowed_paths):
            raise PermissionError(f"Access denied: {path}")
        return resolved
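Exercising such an allow-list check: a path under the sandbox resolves, while a traversal attempt raises. The sandbox path is illustrative, and `Path.is_relative_to` requires Python 3.9+:

```python
from pathlib import Path

def validate_path(path: str, allowed: list[Path]) -> Path:
    """Resolve `path` and reject it unless it stays inside an allowed root."""
    resolved = Path(path).resolve()
    if not any(resolved.is_relative_to(root) for root in allowed):
        raise PermissionError(f"Access denied: {path}")
    return resolved

allowed = [Path("/tmp/sandbox").resolve()]

# A path inside the sandbox passes.
ok = validate_path("/tmp/sandbox/notes.txt", allowed)

# A traversal attempt (.. escaping the sandbox) is rejected.
try:
    validate_path("/tmp/sandbox/../../etc/passwd", allowed)
    blocked = False
except PermissionError:
    blocked = True
```

Resolving before comparing is the important step: `..` segments and symlinks are collapsed first, so the comparison sees where the path actually points.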

Performance

  • Batching: Combine multiple operations
  • Caching: Cache frequently accessed data
  • Async: Use async/await for I/O operations
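A small TTL cache illustrates the caching point: repeated reads of the same resource inside the window skip the expensive call. The window length and key scheme below are illustrative:

```python
import time

class TTLCache:
    """Cache tool results for a fixed number of seconds."""

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}
        self.misses = 0

    def get_or_compute(self, key: str, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # fresh entry: skip the expensive call
        self.misses += 1
        value = compute()
        self._store[key] = (now, value)
        return value

cache = TTLCache(ttl_seconds=30.0)
a = cache.get_or_compute("weather:oslo", lambda: "22°C")
b = cache.get_or_compute("weather:oslo", lambda: "22°C")
```

Using `time.monotonic()` rather than `time.time()` keeps expiry correct even if the system clock is adjusted.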

Common MCP Servers

Official MCP Servers

  • GitHub: Repository and PR management
  • Filesystem: Local file access
  • SQLite: Database queries
  • Brave Search: Web search

Community Servers

  • Slack: Team communication
  • Jira: Project management
  • PostgreSQL: SQL database
  • AWS: Cloud resources

Conclusion

MCP has transformed AI from a chat interface into a capable agent that can actually do work. By providing a standardized way to connect AI models to the tools and data they need, MCP enables the creation of powerful automation systems.

Start building MCP servers for your most common workflows. The investment pays off quickly as your AI assistant becomes capable of real work. The future of AI is agentic, and MCP is the protocol that makes it possible.
