Introduction
Serverless computing has transformed how developers build and deploy applications. By eliminating server management and scaling automatically, teams can focus on business logic. This guide covers serverless architecture patterns, implementation strategies, and best practices for building production-ready serverless applications.
Serverless computing is a cloud execution model in which the cloud provider provisions and runs the servers and dynamically manages the allocation of machine resources.
Core Concepts
Serverless vs Traditional
┌─────────────────────────────────────────────────────────────┐
│                  Traditional vs Serverless                  │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Traditional:                                               │
│  ┌───────────────────────────────────────────┐              │
│  │ Always running server                     │              │
│  │ Fixed capacity                            │              │
│  │ Pay for idle time                         │              │
│  │ Manage scaling                            │              │
│  └───────────────────────────────────────────┘              │
│                                                             │
│  Serverless:                                                │
│  ┌───────────────────────────────────────────┐              │
│  │ Event-triggered functions                 │              │
│  │ Auto-scaling to zero                      │              │
│  │ Pay per invocation                        │              │
│  │ No server management                      │              │
│  └───────────────────────────────────────────┘              │
│                                                             │
└─────────────────────────────────────────────────────────────┘
AWS Lambda
Function Structure
import json
import boto3
def lambda_handler(event, context):
    """AWS Lambda handler function."""
    # Route based on HTTP method and path
    http_method = event.get('httpMethod')
    path = event.get('path')
    if http_method == 'GET' and path == '/users':
        return get_users(event)
    elif http_method == 'POST' and path == '/users':
        return create_user(event)
    return {
        'statusCode': 404,
        'body': json.dumps({'error': 'Not found'})
    }

def get_users(event):
    # queryStringParameters is None (not {}) when absent, so guard with `or {}`
    params = event.get('queryStringParameters') or {}
    limit = int(params.get('limit', 10))
    users = fetch_users_from_db(limit=limit)
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(users)
    }

def create_user(event):
    body = json.loads(event.get('body') or '{}')
    # Validate required fields
    if 'email' not in body:
        return {
            'statusCode': 400,
            'body': json.dumps({'error': 'Email required'})
        }
    user = save_user(body)
    return {
        'statusCode': 201,
        'body': json.dumps(user)
    }
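Handlers like this can be exercised locally by constructing a sample API Gateway proxy event and calling the function directly. A minimal sketch, with a stubbed data layer (fetch_users_from_db here is a stand-in, not a real implementation):

```python
import json

# Hypothetical stub standing in for a real database query
def fetch_users_from_db(limit=10):
    return [{"id": 1, "email": "a@example.com"}][:limit]

def get_users(event):
    params = event.get('queryStringParameters') or {}
    limit = int(params.get('limit', 10))
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(fetch_users_from_db(limit=limit))
    }

# Shape of an API Gateway (REST, proxy integration) event
event = {
    "httpMethod": "GET",
    "path": "/users",
    "queryStringParameters": {"limit": "5"}
}
response = get_users(event)
print(response["statusCode"])        # 200
```

Note that everything in the event arrives as strings, which is why the handler converts `limit` with `int()`.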
Lambda Layers
# Create a Lambda layer. For Python runtimes, the layer zip must place
# dependencies under a python/ directory (e.g. python/requests/).
aws lambda publish-layer-version \
  --layer-name my-layer \
  --zip-file fileb://layer.zip \
  --compatible-runtimes python3.11
# In the Lambda configuration, attach the layer by ARN, e.g.:
# arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1
Cold Start Optimization
# Keep connections outside the handler
import json
import os

import boto3

# Global scope runs once per execution environment and is reused
# across warm invocations, so clients created here are not rebuilt
dynamodb = boto3.resource('dynamodb')
table_name = os.environ.get('TABLE_NAME')

def lambda_handler(event, context):
    """Handler function."""
    table = dynamodb.Table(table_name)
    result = table.get_item(Key={'id': '123'})
    return {'statusCode': 200, 'body': json.dumps(result)}
Azure Functions
Function Triggers
# Azure Functions with Python (v2 programming model)
import json

import azure.functions as func

app = func.FunctionApp()

@app.route(route="users", methods=["GET", "POST"])
def users(req: func.HttpRequest) -> func.HttpResponse:
    """HTTP triggered function."""
    if req.method == "GET":
        return func.HttpResponse(
            json.dumps(get_users_list()),
            mimetype="application/json"
        )
    # POST: create a user
    req_body = req.get_json()
    user = create_user(req_body)
    return func.HttpResponse(
        json.dumps(user),
        status_code=201,
        mimetype="application/json"
    )

@app.queue_trigger(arg_name="myQueueItem", queue_name="orders",
                   connection="AzureWebJobsStorage")
def process_order(myQueueItem: str) -> None:
    """Queue triggered function."""
    order = json.loads(myQueueItem)
    process_order_logic(order)

@app.timer_trigger(arg_name="myTimer", schedule="0 0 0 * * *")
def daily_job(myTimer: func.TimerRequest) -> None:
    """Timer triggered function (NCRONTAB: daily at midnight)."""
    if myTimer.past_due:
        # Note the late run, but execute the task either way
        logging.warning("Timer is past due")
    run_daily_task()
Durable Functions
# Orchestration with Durable Functions
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    """Orchestrate a workflow. Python orchestrators are generators:
    activities are awaited with `yield`, not async/await."""
    # Step 1: Read orchestration input
    input_data = context.get_input()
    # Step 2: Call an activity function
    result1 = yield context.call_activity("ProcessData", input_data)
    # Step 3: Conditional branching
    if result1["status"] == "success":
        result2 = yield context.call_activity("SendNotification", result1)
    else:
        result2 = yield context.call_activity("HandleError", result1)
    return {"results": [result1, result2]}

main = df.Orchestrator.create(orchestrator_function)
Google Cloud Functions
HTTP Functions
# Google Cloud Functions (2nd gen) with the Functions Framework
import functions_framework
from flask import jsonify

@functions_framework.http
def handle_request(request):
    """HTTP triggered function; `request` is a Flask Request object."""
    data = request.get_json()
    result = process_data(data)
    return jsonify(result)

# 1st gen background functions use a (data, context) signature instead
def handle_event(data, context):
    """Background event handler."""
    event_id = context.event_id
    timestamp = context.timestamp
    process_background(data)
Event-Driven Patterns
Event Sources
| Source | Trigger Type | Use Case |
|--------|--------------|----------|
| API Gateway | HTTP | Web/mobile backends |
| S3 | Object created | File processing |
| SQS | Message | Async workers |
| DynamoDB | Stream | Data sync |
| CloudWatch | Schedule | Cron jobs |
| EventBridge | Events | Event routing |
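As a concrete example of one of these sources, an S3 "object created" trigger delivers an event whose `Records` list carries the bucket name and object key. A minimal sketch of a handler that extracts them (what you do with each object is up to you):

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Collect (bucket, key) pairs from an S3 event notification."""
    processed = []
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        # Keys arrive URL-encoded (spaces become '+')
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        processed.append((bucket, key))
    return {'statusCode': 200, 'body': json.dumps({'processed': len(processed)})}

# Sample event shape for a single uploaded object
sample = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                              "object": {"key": "reports/q1+2024.csv"}}}]}
result = lambda_handler(sample, None)  # key decodes to "reports/q1 2024.csv"
```

Decoding the key with `unquote_plus` matters in practice: a handler that passes the raw key back to S3 will fail on any filename containing spaces.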
Lambda Destinations
# Lambda destinations route the result of asynchronous invocations.
# Configure in the console, SAM, or the CLI; the destination config
# takes OnSuccess/OnFailure keys, each pointing at a target ARN:
destination_config = {
    "OnSuccess": {
        "Destination": "arn:aws:sqs:us-east-1:123456789012:success-queue"
    },
    "OnFailure": {
        "Destination": "arn:aws:lambda:us-east-1:123456789012:function:error-handler"
    }
}
Infrastructure as Code
AWS SAM Template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Serverless Application

Globals:
  Function:
    Timeout: 30
    MemorySize: 256

Resources:
  ProcessPaymentFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: payment.process
      Runtime: python3.11
      Environment:
        Variables:
          TABLE_NAME: !Ref PaymentsTable
      Events:
        HttpApi:
          Type: HttpApi
          Properties:
            Path: /payments
            Method: POST
        SQSQueue:
          Type: SQS
          Properties:
            Queue: !GetAtt PaymentQueue.Arn
            BatchSize: 10

  PaymentsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: payments
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
Serverless Framework
# serverless.yml
service: my-service

provider:
  name: aws
  runtime: python3.11
  stage: ${opt:stage, 'dev'}
  environment:
    TABLE_NAME: ${self:service}-${self:provider.stage}
  # IAM statements belong under the provider, not under a function
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:PutItem
          Resource: !GetAtt PaymentsTable.Arn

functions:
  processPayment:
    handler: payment.process
    events:
      - http:
          path: /payments
          method: post
      - sqs:
          arn: !GetAtt PaymentQueue.Arn
          batchSize: 10

resources:
  Resources:
    PaymentsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:service}-${self:provider.stage}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
Cost Optimization
Pricing Comparison
| Provider | Invocations | Compute Time | Free Tier |
|---|---|---|---|
| AWS Lambda | $0.20/million | $0.000016667/GB-s | 1M + 400K GB-s |
| Azure Functions | $0.20/million | $0.000016/GB-s | 1M + 400K GB-s |
| Google Cloud | $0.40/million | $0.0000125/GB-s | 2M + 400K GB-s |
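Plugging the AWS numbers above into a concrete workload makes the pricing model tangible. A sketch that estimates monthly Lambda cost, ignoring the free tier (the defaults mirror the table's AWS rates):

```python
def monthly_lambda_cost(invocations, avg_duration_ms, memory_mb,
                        per_million=0.20, per_gb_s=0.000016667):
    """Estimate monthly Lambda cost in USD, ignoring the free tier."""
    # Request charge: flat rate per million invocations
    request_cost = (invocations / 1_000_000) * per_million
    # Compute charge: GB-seconds = invocations * seconds * GB allocated
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * per_gb_s
    return request_cost + compute_cost

# 10M invocations/month, 120 ms average, 256 MB
cost = monthly_lambda_cost(10_000_000, 120, 256)  # roughly $7/month
```

The split is informative: here $2 of the total is request charges and about $5 is compute, so for short-duration functions the per-invocation rate can dominate.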
Optimization Strategies
# Optimize Lambda memory:
# more memory = more CPU = faster execution.
# It is often cheaper to use more memory for a shorter time.

def calculate_optimal_memory(execution_time_ms, memory_mb):
    """Estimate whether doubling memory saves money."""
    gb_s_price = 0.000016667  # USD per GB-second
    cost_current = (execution_time_ms / 1000) * (memory_mb / 1024) * gb_s_price
    # Try double the memory, estimating 40% faster execution
    memory_high = memory_mb * 2
    time_high = execution_time_ms * 0.6
    cost_high = (time_high / 1000) * (memory_high / 1024) * gb_s_price
    return {
        "current_cost": cost_current,
        "optimized_cost": cost_high,
        "savings": cost_current - cost_high
    }
Best Practices
- Keep functions small: Single responsibility
- Avoid cold starts: Use provisioned concurrency
- Use destinations: For async processing
- Implement idempotency: Handle retries safely
- Set appropriate timeout: Match workload
- Use layers: Share common code
- Monitor costs: Track invocation and duration
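The idempotency point above can be sketched by keying each event on a unique ID and skipping duplicates. In production the seen-set would live in DynamoDB or Redis rather than in memory, and all names here are illustrative:

```python
_processed_ids = set()  # stand-in for a DynamoDB/Redis idempotency table

def handle_event(event):
    """Process an event at most once, keyed on its unique ID."""
    event_id = event["id"]
    if event_id in _processed_ids:
        # Retried delivery of an already-handled event: do nothing
        return {"status": "duplicate", "id": event_id}
    # ... real side effects (charge card, send email) would go here ...
    _processed_ids.add(event_id)
    return {"status": "processed", "id": event_id}

# A retry of the same event becomes a harmless no-op
first = handle_event({"id": "evt-1"})
retry = handle_event({"id": "evt-1"})
```

This matters because every async trigger in this guide (SQS, queues, destinations) delivers at-least-once: your function will eventually see the same event twice, and correctness depends on handling the second copy safely.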
Conclusion
Serverless architecture enables building scalable applications with minimal operational overhead. By understanding trigger types, implementing proper patterns, and optimizing costs, teams can leverage serverless for everything from APIs to background processing.