## Introduction

Container orchestration has become essential for deploying and managing containerized applications at scale, and Kubernetes has emerged as the dominant orchestration platform, with managed offerings from every major cloud provider.

This guide examines container orchestration with Kubernetes: core concepts, cluster management, workload deployment, networking, storage, and operational best practices. Whether you are deploying your first cluster or optimizing an existing one, it covers the knowledge necessary for success.
## Kubernetes Architecture

### Cluster Components

```mermaid
graph TB
    subgraph "Control Plane"
        API[API Server]
        etcd[etcd]
        sched[Scheduler]
        ctrl[Controller Manager]
    end
    subgraph "Worker Nodes"
        kubelet[Kubelet]
        kubeproxy[Kube-proxy]
        containerd[Container Runtime]
        pods[Pods]
    end
    API --> etcd
    API --> sched
    API --> ctrl
    sched --> kubelet
    kubelet --> containerd
    containerd --> pods
    kubeproxy --> pods
```
### Control Plane Components

| Component | Function |
|---|---|
| API Server | REST API front end for all cluster operations |
| etcd | Distributed key-value store holding cluster state |
| Scheduler | Assigns Pods to nodes based on resources and constraints |
| Controller Manager | Runs controllers that reconcile actual state toward desired state |
### Worker Node Components

| Component | Function |
|---|---|
| Kubelet | Node agent that starts and monitors Pod containers |
| Kube-proxy | Maintains network rules that route Service traffic to Pods |
| Container Runtime | Runs containers (containerd, CRI-O) |
## Core Concepts

### Pods

A Pod is the smallest deployable unit in Kubernetes: one or more containers that share a network namespace and storage. The manifest below defines a single-container Pod with resource requests and limits, and liveness and readiness probes.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: order-service
  labels:
    app: order-service
    version: v1
spec:
  containers:
    - name: order-service
      image: myregistry/order-service:v1.2.3
      ports:
        - containerPort: 8080
          name: http
        - containerPort: 9090
          name: grpc
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-secrets
              key: url
        - name: LOG_LEVEL
          value: "info"
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
          cpu: "500m"
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
```
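For containers with long initialization (large JVM applications, cache warm-up), a `startupProbe` can hold off liveness checks until the application has actually started. This fragment is an illustrative sketch that would sit alongside the probes in the container spec above; the thresholds are assumptions, not values from this guide:

```yaml
startupProbe:
  httpGet:
    path: /health
    port: 8080
  failureThreshold: 30   # allow up to 30 * 10s = 300s for startup
  periodSeconds: 10
```

Until the startup probe succeeds, the liveness probe is disabled, so a slow boot is not mistaken for a hung process.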
### Deployments

A Deployment manages a replicated set of Pods and handles rolling updates. With `maxSurge: 1` and `maxUnavailable: 0`, an update adds one new Pod at a time and never drops below the desired replica count.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: order-service
        version: v1
    spec:
      containers:
        - name: order-service
          image: myregistry/order-service:v1.2.3
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
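Rolling updates protect availability during deploys, but voluntary disruptions such as node drains can still take Pods down. A PodDisruptionBudget, sketched here for the same `app: order-service` label (the name and threshold are illustrative), tells Kubernetes to keep a minimum number of replicas ready during such operations:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: order-service-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: order-service
```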
### Services

A Service gives a set of Pods, selected by label, a stable virtual IP and DNS name. `ClusterIP` exposes the service inside the cluster only; `LoadBalancer` additionally provisions an external load balancer through the cloud provider.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  type: ClusterIP
  selector:
    app: order-service
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: grpc
      port: 9090
      targetPort: 9090
---
apiVersion: v1
kind: Service
metadata:
  name: order-service-lb
spec:
  type: LoadBalancer
  selector:
    app: order-service
  ports:
    - port: 80
      targetPort: 8080
```
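Clients that need to address individual Pods directly (stateful peers, client-side load balancing) can use a headless Service: setting `clusterIP: None` makes the service DNS name resolve to the Pod IPs instead of a virtual IP. The fragment below is a sketch for the same selector; the name is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: order-service-headless
spec:
  clusterIP: None
  selector:
    app: order-service
  ports:
    - port: 8080
```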
### ConfigMaps and Secrets

ConfigMaps hold non-sensitive configuration; Secrets hold credentials and other sensitive values. Note that Secrets are only base64-encoded by default, so pair them with encryption at rest and RBAC.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database.host: "db.example.com"
  database.port: "5432"
  log.level: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: admin
  password: ${PASSWORD}  # placeholder: substitute before apply (e.g. envsubst); never commit real values
```
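A container can load every key of a ConfigMap or Secret as environment variables with `envFrom`, rather than mapping keys one by one. This container-spec excerpt is a sketch that assumes the `app-config` and `db-credentials` objects above and a hypothetical image name:

```yaml
containers:
  - name: app
    image: myregistry/order-service:v1.2.3
    envFrom:
      - configMapRef:
          name: app-config   # each key becomes an env var
      - secretRef:
          name: db-credentials
```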
## Managed Kubernetes Services

### Amazon EKS
```bash
# Create an EKS cluster with a managed node group
eksctl create cluster \
  --name production-cluster \
  --version 1.29 \
  --region us-east-1 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 10

# Install the VPC CNI add-on with an IAM role for its service account
aws eks create-addon \
  --cluster-name production-cluster \
  --addon-name vpc-cni \
  --addon-version v1.16.4-eksbuild.1 \
  --service-account-role-arn arn:aws:iam::123456789012:role/EKSCNIRole
```
### Azure AKS
```bash
# Create an AKS cluster
az aks create \
  --name production-cluster \
  --resource-group rg-production \
  --location eastus \
  --kubernetes-version 1.29 \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --enable-addons monitoring \
  --enable-managed-identity
```
### Google GKE
```bash
# Create a regional GKE cluster with autoscaling node pools
gcloud container clusters create production-cluster \
  --location us-east1 \
  --node-locations us-east1-a,us-east1-b \
  --release-channel stable \
  --machine-type e2-standard-4 \
  --num-nodes 3 \
  --enable-autoscaling \
  --min-nodes 2 \
  --max-nodes 10
```
## Networking

### Network Policies

By default, every Pod can reach every other Pod; NetworkPolicies restrict traffic by label. This policy allows ingress to order-service only from the API gateway and egress only to the database.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: order-service-policy
spec:
  podSelector:
    matchLabels:
      app: order-service
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
```
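NetworkPolicies are additive, so a common baseline is a namespace-wide default deny, after which each permitted flow is allowed explicitly by policies like the one above. A sketch (note that denying all egress also blocks DNS, which usually needs its own allow rule):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}        # empty selector: applies to every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```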
### Ingress

An Ingress routes external HTTP(S) traffic to Services by host and path. Here cert-manager provisions the TLS certificate and the NGINX ingress controller handles routing.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
```
## Storage

### PersistentVolumeClaims

A PersistentVolumeClaim requests durable storage from a StorageClass; the volume is provisioned dynamically and mounted into the Pod.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-volume
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```
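The `gp3` class referenced above is not built in. On EKS it would be defined through the AWS EBS CSI driver, roughly as below; the provisioner name is the standard one for that driver, and the parameters are illustrative assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer  # delay provisioning until a Pod is scheduled
```

`WaitForFirstConsumer` ensures the volume is created in the same availability zone as the Pod that uses it.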
## Workload Management

### Horizontal Pod Autoscaler

The HPA scales a Deployment between bounds based on observed metrics (CPU and memory here, reported by the metrics server). The `behavior` block scales up aggressively but scales down slowly to avoid flapping.
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
        - type: Pods
          value: 4
          periodSeconds: 15
      selectPolicy: Max
```
### Resource Quotas

A ResourceQuota caps the aggregate resources and object counts a namespace can consume.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    pods: "50"
    services: "10"
    secrets: "20"
    configmaps: "20"
```
### LimitRanges

A LimitRange sets per-container bounds and fills in default requests and limits for containers that omit them.
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
    - max:
        cpu: "4"
        memory: 8Gi
      min:
        cpu: 50m
        memory: 64Mi
      default:
        cpu: 200m
        memory: 256Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      type: Container
```
## Cluster Management

### RBAC

Role-based access control grants the order-service service account read-only access to core and apps resources within the production namespace.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: order-service-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: order-service-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: order-service
    namespace: production
roleRef:
  kind: Role
  name: order-service-role
  apiGroup: rbac.authorization.k8s.io
```
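The RoleBinding refers to a ServiceAccount that must exist and be referenced from the Pod spec via `serviceAccountName: order-service`. Its definition is minimal:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: order-service
  namespace: production
```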
### Pod Security Standards

Namespace labels enforce the built-in Pod Security Standards; the `restricted` profile rejects privileged containers, host namespaces, and containers running as root.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
### GitOps with ArgoCD

Argo CD continuously syncs the cluster to the manifests in a Git repository; with automated sync, drifted resources are pruned and self-healed back to the declared state.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: order-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/app-config
    targetRevision: main
    path: production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
## Monitoring

### Prometheus Operator

The Prometheus Operator discovers scrape targets through ServiceMonitors and loads alerting rules from PrometheusRule objects. Here, order-service is scraped every 15 seconds and an alert fires if its 5xx rate exceeds 5% for five minutes.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: order-service
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: order-service
  endpoints:
    - port: http
      path: /metrics
      interval: 15s
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: order-service-alerts
spec:
  groups:
    - name: order-service
      rules:
        - alert: HighErrorRate
          expr: |
            sum(rate(http_requests_total{service="order-service",status=~"5.."}[5m]))
            /
            sum(rate(http_requests_total{service="order-service"}[5m])) > 0.05
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "High error rate detected"
```
## Conclusion

Kubernetes provides the building blocks for deploying and managing containerized applications at scale. Understanding the core concepts of Pods, Deployments, and Services enables building robust systems, and managed offerings such as EKS, AKS, and GKE remove much of the control-plane operational burden.

Key practices include setting resource requests and limits, restricting traffic with network policies, deploying through GitOps for auditable and repeatable releases, and instrumenting workloads for observability. The investment in container orchestration pays dividends in deployment velocity and operational efficiency.

As container adoption grows, Kubernetes expertise becomes increasingly valuable. Build a strong foundation with the core concepts, then progressively adopt autoscaling, policy enforcement, and GitOps as your needs evolve.