Microservices vs Monolith: Choosing the Right Architecture
Choosing between microservices and monolithic architectures is a critical decision affecting scalability, complexity, and cost. This guide provides a decision framework.
Architecture Comparison
Monolithic Architecture
Monolithic applications are single, integrated units where all components share the same database and run in the same process.
Advantages:
- Simpler to build initially - Single codebase, easier development
- Easier to test end-to-end - All components available locally
- Better performance - No network latency between components
- Simpler deployment - Single artifact to deploy
- Easier debugging - Stack traces show all components
Disadvantages:
- Tight coupling - Changes to one component can break others
- Scaling limitations - Must scale entire application, not individual services
- Technology lock-in - All components use same stack
- Deployment risk - Small bug can bring down entire system
- Large team coordination - Multiple teams working on same codebase
Example Monolith:
# Single application
app/
├── auth/
│   ├── models.py
│   ├── routes.py
│   └── services.py
├── orders/
│   ├── models.py
│   ├── routes.py
│   └── services.py
├── payments/
│   ├── models.py
│   ├── routes.py
│   └── services.py
├── inventory/
│   ├── models.py
│   ├── routes.py
│   └── services.py
└── shared/
    ├── database.py
    ├── logging.py
    └── config.py
Monolithic API:
from fastapi import FastAPI

app = FastAPI()

# All routes live in a single app.
# (LoginRequest, Order, Payment, and InventoryItem are Pydantic models
# defined elsewhere in the codebase.)

@app.post("/auth/login")
async def login(credentials: LoginRequest):
    # Auth logic
    pass

@app.post("/orders/create")
async def create_order(order: Order):
    # Orders logic
    pass

@app.post("/payments/process")
async def process_payment(payment: Payment):
    # Payments logic
    pass

@app.post("/inventory/update")
async def update_inventory(item: InventoryItem):
    # Inventory logic
    pass
Microservices Architecture
Microservices split the application into loosely coupled, independently deployable services.
Advantages:
- Independent scaling - Scale only services that need it
- Technology flexibility - Each service can use different tech
- Fault isolation - One service failing doesn’t crash others
- Easier updates - Deploy individual services without downtime
- Team autonomy - Teams own specific services
Disadvantages:
- Complexity - Multiple services to manage and monitor
- Network latency - Inter-service communication over network
- Data consistency - Distributed transactions are hard
- Operational overhead - Requires advanced DevOps
- Harder debugging - Traces span multiple services
Example Microservices:
services/
├── auth-service/          # Port 8001
│   ├── main.py
│   ├── models.py
│   ├── routes.py
│   └── Dockerfile
├── orders-service/        # Port 8002
│   ├── main.py
│   ├── models.py
│   ├── routes.py
│   └── Dockerfile
├── payments-service/      # Port 8003
│   ├── main.py
│   ├── models.py
│   ├── routes.py
│   └── Dockerfile
├── inventory-service/     # Port 8004
│   ├── main.py
│   ├── models.py
│   ├── routes.py
│   └── Dockerfile
└── api-gateway/
    ├── main.py
    └── routes.py
Microservices Communication:
# API Gateway
from fastapi import FastAPI
import httpx

gateway = FastAPI()

async def call_auth_service(endpoint: str, payload: dict):
    async with httpx.AsyncClient() as client:
        response = await client.post(
            f"http://auth-service:8001{endpoint}", json=payload
        )
        return response.json()

@gateway.post("/auth/login")
async def login(credentials: LoginRequest):
    # Forward the credentials to the auth service
    return await call_auth_service("/login", credentials.dict())

# Async message queue for events
import json
import aio_pika

async def publish_order_created(order_id: str):
    """Publish an event about a new order"""
    connection = await aio_pika.connect_robust("amqp://guest:guest@rabbitmq/")
    channel = await connection.channel()
    exchange = await channel.declare_exchange('orders', aio_pika.ExchangeType.TOPIC)
    message = aio_pika.Message(
        body=json.dumps({'order_id': order_id}).encode()
    )
    await exchange.publish(message, routing_key='order.created')
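On the consuming side, a service such as inventory would bind a queue to the `orders` exchange and react to `order.created` events. The fan-out pattern itself can be sketched without a broker; this in-memory stand-in (all names are illustrative, not aio_pika APIs) shows how a publisher and consumer stay decoupled:

```python
import asyncio
import json

class InMemoryTopicExchange:
    """Minimal stand-in for a topic exchange: fan messages out to bound queues."""
    def __init__(self):
        self.bindings = {}  # routing_key -> list of subscriber queues

    def bind(self, routing_key: str) -> asyncio.Queue:
        queue = asyncio.Queue()
        self.bindings.setdefault(routing_key, []).append(queue)
        return queue

    async def publish(self, routing_key: str, body: bytes):
        for queue in self.bindings.get(routing_key, []):
            await queue.put(body)

async def main():
    exchange = InMemoryTopicExchange()
    # The "inventory service" subscribes before any events are published
    inventory_queue = exchange.bind("order.created")

    # The "orders service" publishes an event...
    await exchange.publish(
        "order.created", json.dumps({"order_id": "42"}).encode()
    )

    # ...and the consumer reacts to it independently of the publisher.
    event = json.loads(await inventory_queue.get())
    print(f"Reserving stock for order {event['order_id']}")
    return event

event = asyncio.run(main())
```

With a real broker the shape is the same: declare the exchange, bind a queue with the routing key, and process messages as they arrive.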
Decision Framework
Decision Tree
START
│
├─ Team size < 5?
│   └─ YES → Monolith (easier to coordinate)
│
├─ Expected throughput > 1000 req/sec?
│   └─ YES → Microservices (independent scaling)
│
├─ Need different tech stacks per component?
│   └─ YES → Microservices (flexibility)
│
├─ High uptime requirement (99.9%+)?
│   └─ YES → Microservices (fault isolation)
│
├─ Budget for DevOps/infrastructure?
│   └─ NO → Monolith (simpler operations)
│
└─ Otherwise → Go with Monolith or Microservices based on the trade-offs above
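The tree above is easy to codify. The sketch below walks the same questions in order; the function name, parameters, and the "default to the simpler option" tiebreak are our choices, not part of the framework itself:

```python
def recommend_architecture(team_size: int,
                           throughput_rps: int,
                           needs_polyglot: bool,
                           uptime_slo: float,
                           has_devops_budget: bool) -> str:
    """Walk the decision tree; returns 'monolith' or 'microservices'."""
    if team_size < 5:
        return "monolith"        # easier to coordinate
    if throughput_rps > 1000:
        return "microservices"   # independent scaling
    if needs_polyglot:
        return "microservices"   # technology flexibility
    if uptime_slo >= 0.999:
        return "microservices"   # fault isolation
    if not has_devops_budget:
        return "monolith"        # simpler operations
    # No branch fired: default to the simpler option
    return "monolith"

# A 3-person team with modest traffic lands on a monolith;
# a 20-person team at 5000 req/sec lands on microservices.
small_team = recommend_architecture(3, 100, False, 0.99, True)
big_team = recommend_architecture(20, 5000, False, 0.99, True)
```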
Cost Analysis
class ArchitectureCostCalculator:
    """Calculate TCO for different architectures"""

    def calculate_monolith_costs(self, team_size: int, months: int):
        """Calculate monolith costs"""
        # Personnel costs
        developers = team_size
        devops = max(1, team_size // 10)  # 1 DevOps per 10 devs
        salary_per_person = 120000  # USD/year

        # Infrastructure costs (single deployment)
        servers = 3  # 1 prod + 1 staging + 1 dev
        server_cost_monthly = 500

        # Database
        db_cost_monthly = 300

        # Monitoring/logging
        monitoring_cost_monthly = 200

        total_monthly = (
            ((developers + devops) * salary_per_person / 12) +
            (servers * server_cost_monthly) +
            db_cost_monthly +
            monitoring_cost_monthly
        )

        return {
            'personnel': (developers + devops) * salary_per_person / 12,
            'infrastructure': servers * server_cost_monthly + db_cost_monthly + monitoring_cost_monthly,
            'total_monthly': total_monthly,
            'total_project': total_monthly * months
        }

    def calculate_microservices_costs(self, num_services: int, team_size: int, months: int):
        """Calculate microservices costs"""
        # Personnel costs (more developers, more DevOps)
        developers = team_size * 1.2  # 20% more developers needed
        devops = max(2, team_size // 5)  # 1 DevOps per 5 devs (more needed)
        salary_per_person = 120000

        # Infrastructure costs (multiple deployments, K8s)
        servers_per_service = 3
        total_servers = num_services * servers_per_service
        server_cost_monthly = 500

        # Kubernetes cluster
        k8s_cost_monthly = 800

        # Databases (one per service)
        db_cost_monthly = 300 * num_services

        # API Gateway
        gateway_cost = 200

        # Monitoring/logging (more complex)
        monitoring_cost_monthly = 500

        # Service mesh (Istio)
        service_mesh_cost = 300

        total_monthly = (
            ((developers + devops) * salary_per_person / 12) +
            (total_servers * server_cost_monthly) +
            k8s_cost_monthly +
            db_cost_monthly +
            gateway_cost +
            monitoring_cost_monthly +
            service_mesh_cost
        )

        return {
            'personnel': (developers + devops) * salary_per_person / 12,
            'infrastructure': (total_servers * server_cost_monthly + k8s_cost_monthly +
                               db_cost_monthly + gateway_cost + monitoring_cost_monthly + service_mesh_cost),
            'total_monthly': total_monthly,
            'total_project': total_monthly * months
        }
# Usage
calc = ArchitectureCostCalculator()

# Small team, short project
monolith = calc.calculate_monolith_costs(team_size=5, months=6)
print(f"Monolith 6-month cost: ${monolith['total_project']:,.0f}")
# Monolith 6-month cost: $372,000

# Microservices with 5 services
microservices = calc.calculate_microservices_costs(num_services=5, team_size=5, months=6)
print(f"Microservices 6-month cost: ${microservices['total_project']:,.0f}")
# Microservices 6-month cost: $544,800 (~46% more expensive)
Migration Strategy
Strangler Fig Pattern
# Gradually replace the monolith with microservices
from fastapi import FastAPI, Request, Response
import httpx

app = FastAPI()

class ServiceRouter:
    """Route requests to the old monolith or to new services"""

    def __init__(self):
        self.monolith_url = "http://old-monolith:8000"
        self.service_registry = {
            # New services take precedence
            "/orders": "http://orders-service:8002",
            "/payments": "http://payments-service:8003",
            "/inventory": "http://inventory-service:8004",
            # Everything else still goes to the monolith
        }

    def resolve_backend(self, path: str) -> str:
        """Pick the new service that owns this path, or fall back to the monolith"""
        for path_prefix, service_url in self.service_registry.items():
            if path.startswith(path_prefix):
                return service_url
        return self.monolith_url

    async def route_request(self, request: Request) -> Response:
        """Forward the request to the chosen backend"""
        backend = self.resolve_backend(request.url.path)
        async with httpx.AsyncClient() as client:
            upstream = await client.request(
                method=request.method,
                url=f"{backend}{request.url.path}",
                content=await request.body()
            )
        # Wrap the upstream reply in a FastAPI response for the caller
        return Response(
            content=upstream.content,
            status_code=upstream.status_code,
            media_type=upstream.headers.get("content-type")
        )

router = ServiceRouter()

@app.api_route("/{path_name:path}", methods=["GET", "POST", "PUT", "DELETE"])
async def catch_all(request: Request):
    """Route all requests through the router"""
    return await router.route_request(request)
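A common refinement of the strangler pattern is to shift only a fraction of traffic to the new service at first. Hashing a stable identifier keeps each user on the same backend across requests. The sketch below is our illustration, not part of the router above; the `rollout` structure and function name are assumptions:

```python
import hashlib

def choose_backend(user_id: str, path: str, rollout: dict, monolith_url: str) -> str:
    """Pick a backend for this request. `rollout` maps path prefixes to
    (service_url, percentage) pairs, where percentage is the share of
    traffic the new service receives. Hashing the user id makes routing
    deterministic and sticky per user."""
    for prefix, (service_url, percentage) in rollout.items():
        if path.startswith(prefix):
            # Bucket the user into 0..99 and compare against the rollout share
            bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
            return service_url if bucket < percentage else monolith_url
    # Paths not yet migrated always go to the monolith
    return monolith_url

# Send 25% of /orders traffic to the new service as a canary
rollout = {"/orders": ("http://orders-service:8002", 25)}
```

Raising the percentage gradually (25% → 50% → 100%) limits the blast radius if the new service misbehaves.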
Database Decoupling
# The monolith still has a single database;
# new services get their own databases.

class OrdersService:
    """New orders microservice with its own database"""

    def __init__(self):
        # The new service has a separate database
        # (PostgresConnection is a placeholder for your async DB client)
        self.db = PostgresConnection("postgresql://user:pass@orders-db:5432/orders")

    async def create_order(self, order_data: dict):
        """Create an order in the new service's database"""
        order_id = await self.db.execute(
            "INSERT INTO orders (customer_id, total) VALUES (?, ?) RETURNING id",
            (order_data['customer_id'], order_data['total'])
        )
        # Also write to the monolith database (temporarily, for consistency)
        # so readers on either side see the order during the gradual migration
        monolith_db = MonolithDBConnection()
        await monolith_db.execute(
            "INSERT INTO orders (id, customer_id, total) VALUES (?, ?, ?)",
            (order_id, order_data['customer_id'], order_data['total'])
        )
        return order_id
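Dual writes like the above can drift if the second write fails mid-flight. A common safer alternative is the transactional outbox pattern: record the event in an `outbox` table inside the same transaction as the order, then relay it asynchronously. A minimal sketch, using sqlite3 as a stand-in for the service's real database (all table and function names are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT, customer_id TEXT, total REAL)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT, published INTEGER DEFAULT 0)")

def create_order(customer_id: str, total: float) -> int:
    # The order row and its event commit atomically: no event is lost
    # even if the message broker is down at write time.
    with conn:
        cur = conn.execute(
            "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
            (customer_id, total))
        order_id = cur.lastrowid
        conn.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"event": "order.created", "order_id": order_id}),))
    return order_id

def relay_outbox(publish) -> int:
    """Relay unpublished events to the broker (`publish` is any callable);
    run periodically by a background worker. Returns the number relayed."""
    rows = conn.execute("SELECT id, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)
```

The relay gives at-least-once delivery, so consumers should treat events as idempotent.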
Performance Comparison
Illustrative figures (actual numbers depend heavily on workload and infrastructure):
Latency (100 concurrent users):
- Monolith: 50ms avg
- Microservices: 150ms avg (adds network overhead)
Throughput:
- Monolith: 1000 req/sec (single server limit)
- Microservices: 10,000+ req/sec (scale each service)
Deployment time:
- Monolith: 5 minutes (rebuild + deploy everything)
- Microservices: 2 minutes (deploy changed service only)
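The latency gap above is mostly arithmetic: each synchronous hop between services adds network and serialization overhead on top of the handler's own work. A simple latency-budget sketch (the 20 ms per-hop figure is an illustrative assumption):

```python
def request_latency_ms(handler_ms: float, hops: int,
                       per_hop_overhead_ms: float = 20.0) -> float:
    """Estimated end-to-end latency for a request that crosses `hops`
    synchronous service boundaries. Each hop adds network round-trip
    plus serialization overhead."""
    return handler_ms + hops * per_hop_overhead_ms

# Monolith: all calls are in-process, so no hop overhead.
monolith = request_latency_ms(50, hops=0)       # 50.0 ms
# Microservices: gateway -> auth -> orders -> payments -> inventory.
microservices = request_latency_ms(50, hops=5)  # 150.0 ms
```

This is why chatty synchronous call chains are the main thing to avoid when splitting services; asynchronous events (as in the RabbitMQ example earlier) take hops off the critical path.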
When to Use Each
Use Monolith When
- ✅ Team < 10 people
- ✅ Single product/feature
- ✅ <1000 requests/second
- ✅ Simple domain logic
- ✅ Early stage startup
- ✅ Budget constrained
Use Microservices When
- ✅ Team > 20 people
- ✅ Multiple independent product features
- ✅ >1000 requests/second
- ✅ Complex domain logic
- ✅ Need independent scaling
- ✅ Different teams own different services
Glossary
- Monolith: Single integrated application
- Microservices: Independent, loosely coupled services
- API Gateway: Single entry point routing to services
- Service Mesh: Infrastructure layer managing service-to-service communication
- Strangler Fig: Gradual replacement pattern