
CI/CD Pipelines: Building Automated Delivery Pipelines

Introduction

Continuous Integration and Continuous Delivery (CI/CD) pipelines automate the process of building, testing, and deploying software. Well-designed pipelines catch issues early, ensure consistent quality, and enable rapid, reliable releases.

This guide covers pipeline design principles, GitHub Actions and GitLab CI configurations, testing strategies, deployment patterns, and quality gates.

Pipeline Design Principles

Pipeline Stages

A typical CI/CD pipeline consists of several stages, each with a specific purpose.

# GitHub Actions workflow example
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 black mypy
      
      - name: Lint with flake8
        run: flake8 . --max-line-length=100
      
      - name: Check formatting with black
        run: black --check .
      
      - name: Type checking with mypy
        run: mypy .

  test:
    needs: lint
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test_db
        ports: ['5432:5432']
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'
      
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest pytest-cov
      
      - name: Run database migrations
        run: |
          export DATABASE_URL=postgresql://test:test@localhost:5432/test_db
          alembic upgrade head
      
      - name: Run tests with pytest
        run: |
          pytest --cov=app --cov-report=xml --cov-report=term-missing
      
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage.xml

  build:
    needs: test
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      
      - name: Log in to container registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha
            type=ref,event=branch
            type=raw,value=latest,enable={{is_default_branch}}
      
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: staging
    
    steps:
      - name: Deploy to staging
        run: |
          echo "Deploying to staging environment..."
          # kubectl apply -f k8s/staging/
          # or use deployment scripts

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production
    
    steps:
      - name: Deploy to production
        run: |
          echo "Deploying to production environment..."

GitLab CI Configuration

# .gitlab-ci.yml
stages:
  - lint
  - test
  - build
  - security
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

lint:
  stage: lint
  image: python:3.11
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  script:
    - pip install flake8 black mypy
    - flake8 .
    - black --check .
    - mypy .
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .cache/pip

test:
  stage: test
  image: python:3.11
  services:
    - postgres:15
  variables:
    POSTGRES_DB: test
    POSTGRES_USER: test
    POSTGRES_PASSWORD: test
    DATABASE_URL: postgresql://test:test@postgres:5432/test
  script:
    - pip install -r requirements.txt pytest pytest-cov
    - pytest --cov=app --cov-report=term-missing --cov-report=xml --junitxml=test-results.xml
  artifacts:
    reports:
      junit: test-results.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest

security:
  stage: security
  image: 
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  allow_failure: true

deploy_staging:
  stage: deploy
  script:
    - echo "Deploying to staging..."
  environment:
    name: staging
  only:
    - develop

deploy_production:
  stage: deploy
  script:
    - echo "Deploying to production..."
  environment:
    name: production
  when: manual
  only:
    - main

Testing Strategies

Test Pyramid

# tests/conftest.py
import pytest
from unittest.mock import Mock, patch
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool

@pytest.fixture
def mock_user_repository():
    """Mock user repository."""
    repo = Mock()
    repo.find_by_email.return_value = None
    repo.find_by_id.return_value = None
    return repo

@pytest.fixture
def test_db():
    """Create test database."""
    # StaticPool reuses a single connection so every session sees
    # the same in-memory SQLite database
    engine = create_engine(
        "sqlite:///:memory:",
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
    )
    TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
    
    # Create tables
    from models.base import Base
    Base.metadata.create_all(bind=engine)
    
    yield TestingSessionLocal
    
    # Cleanup
    Base.metadata.drop_all(bind=engine)

@pytest.fixture
def client(test_db):
    """Create test client."""
    from main import app
    from dependencies import get_db
    
    def override_get_db():
        db = test_db()
        try:
            yield db
        finally:
            db.close()
    
    app.dependency_overrides[get_db] = override_get_db
    
    with TestClient(app) as client:
        yield client
    
    app.dependency_overrides.clear()

# tests/test_users.py
class TestUserAPI:
    """Test user API endpoints."""
    
    def test_create_user_success(self, client):
        """Test successful user creation."""
        response = client.post(
            "/api/v1/users",
            json={
                "email": "[email protected]",
                "name": "Test User",
                "password": "securepassword123"
            }
        )
        
        assert response.status_code == 201
        data = response.json()
        assert data["email"] == "[email protected]"
        assert data["name"] == "Test User"
        assert "id" in data
        assert "password" not in data
    
    def test_create_user_invalid_email(self, client):
        """Test user creation with invalid email."""
        response = client.post(
            "/api/v1/users",
            json={
                "email": "invalid-email",
                "name": "Test",
                "password": "password123"
            }
        )
        
        assert response.status_code == 422
    
    def test_create_user_duplicate_email(self, client, mock_user_repository):
        """Test user creation with existing email."""
        with patch("services.user_service.UserRepository", return_value=mock_user_repository):
            mock_user_repository.find_by_email.return_value = Mock(id="existing")
            
            response = client.post(
                "/api/v1/users",
                json={
                    "email": "[email protected]",
                    "name": "Test",
                    "password": "password123"
                }
            )
            
            assert response.status_code == 400
            assert "already exists" in response.json()["detail"]
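The API tests above sit in the middle of the pyramid. Its base is a much larger number of fast, dependency-free unit tests that exercise pure logic directly. A minimal sketch, using a hypothetical `slugify` helper purely for illustration:

```python
# The pyramid's base: fast, isolated unit tests for pure functions.
# `slugify` is a hypothetical helper, not part of the app above.
import re

def slugify(title: str) -> str:
    """Lowercase, strip non-alphanumerics, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("  CI/CD   Pipelines ") == "ci-cd-pipelines"
```

Tests like these run in milliseconds and need no database or test client, which is why they should outnumber the API and end-to-end tests above.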

Deployment Patterns

Blue-Green Deployment

# Blue-Green deployment with GitHub Actions
name: Blue-Green Deployment

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      
      - name: Deploy to blue environment
        run: |
          # Deploy to blue (the idle environment; green currently serves traffic)
          kubectl apply -f k8s/blue/ --namespace production
          
          # Wait for blue to be ready
          kubectl rollout status deployment/app-blue --namespace production
      
      - name: Run smoke tests
        run: |
          # Test blue environment
          BLUE_URL=$(kubectl get svc app-blue -n production -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
          curl -f "http://${BLUE_URL}/health"
      
      - name: Switch traffic to blue
        run: |
          # Update service to point to blue
          kubectl patch svc app -n production -p '{"spec":{"selector":{"app":"app-blue"}}}'
      
      - name: Verify traffic switch
        run: |
          sleep 30
          kubectl get pods -n production
          kubectl logs -l app=app-blue -n production --tail=100
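The "Run smoke tests" step above is the gate that decides whether traffic ever switches. A minimal sketch of that gate as a retrying health check, where `probe` stands in for any callable that returns True when the new environment is healthy (e.g. an HTTP GET against `/health`); it is injected so the logic stays testable without a cluster:

```python
import time
from typing import Callable

def ready_to_switch(probe: Callable[[], bool],
                    attempts: int = 5,
                    delay: float = 0.0) -> bool:
    """Return True only if `probe` succeeds within `attempts` tries.

    `delay` is the pause between tries; a production version would
    typically use a longer delay or exponential backoff.
    """
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False
```

Only when this returns True should the workflow patch the service selector; otherwise the deployment is aborted and green keeps serving traffic.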

Canary Deployment

# Canary deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
      version: canary
  template:
    metadata:
      labels:
        app: app
        version: canary
    spec:
      containers:
      - name: app
        image: app:latest
        resources:
          limits:
            cpu: "500m"
            memory: "512Mi"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-canary-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-canary
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
---
# Requires a primary (non-canary) Ingress for the same host;
# these annotations divert 10% of its traffic to the canary service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-canary
            port:
              number: 80
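The canary weight above starts at 10% and, if error rates stay acceptable, is raised step by step until the canary takes all traffic. A sketch of that ramp as a pure helper (applying each step would be a `kubectl patch` of the canary-weight annotation; the doubling schedule is just one common choice):

```python
def canary_schedule(start: int = 10, factor: int = 2) -> list[int]:
    """Canary traffic weights to apply in order, ending at 100%.

    Doubles the weight at each promotion step until full rollout.
    """
    weights = []
    weight = start
    while weight < 100:
        weights.append(weight)
        weight *= factor
    weights.append(100)  # final step: canary takes all traffic
    return weights
```

For example, `canary_schedule()` yields `[10, 20, 40, 80, 100]`; between each step the pipeline would watch error-rate and latency metrics and roll back by setting the weight to 0 if they regress.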

Quality Gates

# Quality gate job
quality-gates:
  runs-on: ubuntu-latest
  
  steps:
    - uses: actions/checkout@v4
    
    - name: Check test coverage
      run: |
        # coverage.xml is assumed to be available here, e.g. downloaded
        # from the test job as a workflow artifact
        COVERAGE=$(grep -oP '(?<=line-rate=")[^"]*' coverage.xml | head -1)
        if (( $(echo "$COVERAGE < 0.8" | bc -l) )); then
          echo "Coverage $COVERAGE is below 80%"
          exit 1
        fi
    
    - name: Check security vulnerabilities
      run: |
        pip install safety
        if ! safety check -r requirements.txt; then
          echo "Security vulnerabilities found"
          exit 1
        fi
    
    - name: Check code complexity
      run: |
        pip install radon
        # Rank C or worse means cyclomatic complexity above 10
        COMPLEX=$(radon cc --min C app)
        if [ -n "$COMPLEX" ]; then
          echo "Functions with high complexity found:"
          echo "$COMPLEX"
          exit 1
        fi
    
    - name: Check documentation
      run: |
        # Ensure all public functions have docstrings
        pip install pydocstyle
        if ! pydocstyle app; then
          echo "Missing docstrings found"
          exit 1
        fi

Conclusion

Well-designed CI/CD pipelines are essential for modern software delivery. Key practices: automate everything, run tests early and often, use quality gates to prevent regressions, implement safe deployment strategies (blue-green, canary), and monitor deployments. CI/CD is not just about automation but about building confidence in every change.

Resources

  • GitHub Actions Documentation
  • GitLab CI/CD Documentation
  • “Accelerate” by Nicole Forsgren
  • Google’s Site Reliability Engineering book
