
Python Logging: Configuration and Best Practices for Production Applications

Introduction

Imagine your application is running in production, and something goes wrong. Users report errors, but you have no idea what happened. Your code has no logging, so you’re flying blind. This scenario is all too common, yet it’s entirely preventable with proper logging.

Logging is the practice of recording events that occur during program execution. It's not just for debugging; it's essential for understanding how your application behaves in production, diagnosing problems, monitoring performance, and maintaining system health.

Python’s built-in logging module is powerful and flexible, but many developers either ignore it or use it incorrectly. In this guide, we’ll explore how to configure logging properly, implement best practices, and build logging systems that scale from development through production. By the end, you’ll have the knowledge to implement professional-grade logging in your applications.


Part 1: Understanding Python’s Logging Architecture

The Logging Hierarchy

Python’s logging module consists of four main components:

1. Loggers - Create log records

import logging

logger = logging.getLogger(__name__)
logger.info("This is a log message")

2. Handlers - Send log records to destinations (console, files, etc.)

handler = logging.StreamHandler()  # Console output
file_handler = logging.FileHandler('app.log')  # File output

3. Formatters - Format log records for display

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

4. Filters - Control which records are logged

class InfoFilter(logging.Filter):
    def filter(self, record):
        return record.levelno == logging.INFO

Logging Levels

Python defines five standard logging levels:

import logging

logging.DEBUG      # 10 - Detailed information for debugging
logging.INFO       # 20 - General informational messages
logging.WARNING    # 30 - Warning messages (default level)
logging.ERROR      # 40 - Error messages
logging.CRITICAL   # 50 - Critical errors

# Usage
logger.debug("Debug message")
logger.info("Info message")
logger.warning("Warning message")
logger.error("Error message")
logger.critical("Critical message")

When to use each level:

  • DEBUG: Detailed information useful for diagnosing problems
  • INFO: Confirmation that things are working as expected
  • WARNING: Something unexpected happened or may happen
  • ERROR: A serious problem; the software has not performed some function
  • CRITICAL: A serious error; the program itself may not continue running
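The level acts as a threshold: a logger discards any record below its configured level before it ever reaches a handler. A minimal sketch (the logger name and the in-memory stream are illustrative, used only so the result can be inspected):

```python
import logging
from io import StringIO

# Capture output in memory so we can see which messages survive the threshold
stream = StringIO()
logger = logging.getLogger("level_demo")
logger.setLevel(logging.WARNING)  # suppress DEBUG and INFO
logger.addHandler(logging.StreamHandler(stream))
logger.propagate = False  # keep the demo output out of the root logger

logger.debug("not recorded")        # below WARNING: discarded
logger.info("not recorded either")  # below WARNING: discarded
logger.warning("disk space low")    # at/above WARNING: emitted

output = stream.getvalue()
```

Only the WARNING message ends up in `output`; the DEBUG and INFO records are dropped before formatting.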

Part 2: Basic Logging Configuration

The Simplest Approach

import logging

# Configure basic logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

logger = logging.getLogger(__name__)
logger.info("Application started")

Important: basicConfig() configures the root logger, and a plain call only takes effect if the root logger has no handlers yet. Later calls are silently ignored unless you pass force=True (Python 3.8+).
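A small sketch of that first-call-wins behavior, assuming a fresh interpreter with no prior logging configuration:

```python
import logging

# First call configures the root logger and installs a handler.
logging.basicConfig(level=logging.WARNING)

# A second plain call is silently ignored: root already has a handler.
logging.basicConfig(level=logging.DEBUG)
level_after_ignored = logging.getLogger().level  # still WARNING

# force=True (Python 3.8+) removes existing root handlers and reconfigures.
logging.basicConfig(level=logging.DEBUG, force=True)
level_after_force = logging.getLogger().level
```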

Programmatic Configuration

For more control, configure logging programmatically:

import logging

# Create logger
logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)

# Create console handler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

# Create file handler
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

# Create formatter
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Add formatter to handlers
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

# Add handlers to logger
logger.addHandler(console_handler)
logger.addHandler(file_handler)

# Now use the logger
logger.debug("Debug message")
logger.info("Info message")

Dictionary Configuration

For complex setups, use dictionary configuration:

import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
        'detailed': {
            'format': '%(asctime)s [%(levelname)s] %(name)s:%(filename)s:%(funcName)s:%(lineno)d - %(message)s'
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'standard',
            'stream': 'ext://sys.stdout'
        },
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'level': 'DEBUG',
            'formatter': 'detailed',
            'filename': 'app.log',
            'maxBytes': 10485760,  # 10MB
            'backupCount': 5
        },
    },
    'loggers': {
        '': {  # Root logger
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
            'propagate': True
        }
    }
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger(__name__)

Configuration from File

Store configuration in a file for easy management:

# logging.ini
[loggers]
keys=root,myapp

[handlers]
keys=console,file

[formatters]
keys=standard

[logger_root]
level=DEBUG
handlers=console,file

[logger_myapp]
level=DEBUG
handlers=console,file
qualname=myapp
propagate=0

[handler_console]
class=StreamHandler
level=INFO
formatter=standard
args=(sys.stdout,)

[handler_file]
class=handlers.RotatingFileHandler
level=DEBUG
formatter=standard
args=('app.log', 'a', 10485760, 5)

[formatter_standard]
format=%(asctime)s [%(levelname)s] %(name)s: %(message)s

Load the configuration:

import logging.config

logging.config.fileConfig('logging.ini')
logger = logging.getLogger(__name__)

Part 3: Best Practices

1. Use Module-Level Loggers

# Good: Use __name__ to create module-specific loggers
import logging

logger = logging.getLogger(__name__)

def process_data(data):
    logger.info(f"Processing data: {data}")
    # ... process data

# Bad: Using root logger
logging.info("Processing data")
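Why __name__ matters: dotted logger names form a hierarchy, so a logger named "myapp.db" is a child of "myapp" and propagates its records up to handlers configured on the parent or the root. A quick sketch (the names are illustrative):

```python
import logging

# Dotted names form a hierarchy; records propagate to ancestor handlers
parent = logging.getLogger("myapp")
child = logging.getLogger("myapp.db")

# getLogger() always returns the same instance for the same name,
# so every module sharing a name shares one logger
same = logging.getLogger("myapp.db")
```

This is why configuring handlers once on "myapp" (or the root logger) is enough: every `myapp.*` module logger inherits that setup through propagation.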

2. Structured Logging

Include relevant context in log messages:

import logging
import json
from datetime import datetime

logger = logging.getLogger(__name__)

# Good: Structured information
def process_user(user_id, action):
    logger.info(
        "User action",
        extra={
            'user_id': user_id,
            'action': action,
            'timestamp': datetime.now().isoformat()
        }
    )

# Better: Use JSON formatting for machine parsing
class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_data = {
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
            'module': record.module,
            'function': record.funcName,
            'line': record.lineno
        }
        return json.dumps(log_data)

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)

3. Log Rotation

Prevent log files from growing too large:

import logging.handlers

# Rotate by size
handler = logging.handlers.RotatingFileHandler(
    'app.log',
    maxBytes=10485760,  # 10MB
    backupCount=5       # Keep 5 backup files
)

# Rotate by time
handler = logging.handlers.TimedRotatingFileHandler(
    'app.log',
    when='midnight',    # Rotate at midnight
    interval=1,         # Every day
    backupCount=7       # Keep 7 days of logs
)

logger = logging.getLogger(__name__)
logger.addHandler(handler)

4. Exception Logging

Capture full exception information:

import logging

logger = logging.getLogger(__name__)

try:
    result = 10 / 0
except ZeroDivisionError:
    # Good: Includes full traceback
    logger.exception("Division by zero error")
    
    # Alternative: Manually include traceback
    logger.error("Division by zero error", exc_info=True)
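To see what logger.exception() actually records, you can route a logger into an in-memory buffer and inspect the result (the logger name and StringIO stream below are illustrative):

```python
import logging
from io import StringIO

# Send this demo logger's output to an in-memory buffer for inspection
stream = StringIO()
logger = logging.getLogger("exc_demo")
logger.addHandler(logging.StreamHandler(stream))
logger.propagate = False

try:
    10 / 0
except ZeroDivisionError:
    logger.exception("Division by zero error")  # logs at ERROR + traceback

output = stream.getvalue()
```

The captured output contains both the message and the full traceback, which is exactly what makes `logger.exception()` preferable to a bare `logger.error()` inside an `except` block.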

5. Performance Considerations

Avoid expensive operations in log messages:

import logging

logger = logging.getLogger(__name__)

# Bad: the f-string (and the expensive call) run even if DEBUG is disabled
logger.debug(f"Processing {expensive_function()}")

# Good: %s formatting is deferred until the record is actually emitted
# (note: expensive_function() itself is still called here)
logger.debug("Processing %s", expensive_function())

# Better: guard the call so nothing runs when DEBUG is disabled
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(f"Processing {expensive_function()}")

6. Contextual Information

Use context variables for request-scoped logging:

import logging
from contextvars import ContextVar

# Store request ID in context
request_id_var = ContextVar('request_id', default=None)

class RequestIDFilter(logging.Filter):
    def filter(self, record):
        record.request_id = request_id_var.get()
        return True

# Configure formatter to include request ID
formatter = logging.Formatter(
    '%(asctime)s [%(request_id)s] %(levelname)s: %(message)s'
)

handler = logging.StreamHandler()
handler.setFormatter(formatter)
handler.addFilter(RequestIDFilter())

logger = logging.getLogger(__name__)
logger.addHandler(handler)

# Usage
def handle_request(request_id):
    request_id_var.set(request_id)
    logger.info("Processing request")

Part 4: Production-Ready Logging

Environment-Specific Configuration

import logging
import os

def setup_logging():
    """Configure logging based on environment"""
    env = os.getenv('ENVIRONMENT', 'development')
    
    if env == 'production':
        level = logging.WARNING
        format_string = '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
    elif env == 'staging':
        level = logging.INFO
        format_string = '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
    else:  # development
        level = logging.DEBUG
        format_string = '%(asctime)s [%(levelname)s] %(name)s:%(filename)s:%(funcName)s:%(lineno)d - %(message)s'
    
    logging.basicConfig(
        level=level,
        format=format_string
    )

setup_logging()

Logging in Web Applications

import logging
from flask import Flask, request, g
import uuid

app = Flask(__name__)

# Configure logging; a filter fills in %(request_id)s on every record
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s [%(request_id)s] %(levelname)s: %(message)s'
)

class RequestIDFilter(logging.Filter):
    """Attach the current request's ID to each log record."""
    def filter(self, record):
        try:
            record.request_id = g.request_id
        except (RuntimeError, AttributeError):  # outside a request
            record.request_id = '-'
        return True

for root_handler in logging.getLogger().handlers:
    root_handler.addFilter(RequestIDFilter())

logger = logging.getLogger(__name__)

@app.before_request
def before_request():
    """Add request ID to context"""
    g.request_id = str(uuid.uuid4())
    logger.info(f"Request started: {request.method} {request.path}")

@app.after_request
def after_request(response):
    """Log response"""
    logger.info(f"Request completed: {response.status_code}")
    return response

@app.route('/api/users/<user_id>')
def get_user(user_id):
    logger.info(f"Fetching user {user_id}")
    try:
        user = fetch_user(user_id)
        logger.info(f"User {user_id} fetched successfully")
        return user
    except Exception as e:
        logger.exception(f"Error fetching user {user_id}")
        return {'error': str(e)}, 500

Logging in Microservices

import logging
from pythonjsonlogger import jsonlogger

# Use JSON logging for easy parsing in log aggregation systems
logger = logging.getLogger()
logHandler = logging.StreamHandler()
formatter = jsonlogger.JsonFormatter()
logHandler.setFormatter(formatter)
logger.addHandler(logHandler)
logger.setLevel(logging.INFO)

# Log with structured data
logger.info("User created", extra={
    'user_id': '12345',
    'service': 'user-service',
    'action': 'create',
    'duration_ms': 150
})

Part 5: Common Pitfalls

Pitfall 1: Not Configuring Logging

# Bad: No configuration
import logging
logging.info("This won't appear!")  # Won't show up

# Good: Configure first
import logging
logging.basicConfig(level=logging.INFO)
logging.info("This will appear!")

Pitfall 2: Using String Formatting

# Bad: String formatting happens even if not logged
logger.debug("User: " + user.name + " Action: " + action)

# Good: Use lazy formatting
logger.debug("User: %s Action: %s", user.name, action)
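The difference is observable. With a helper class that counts how often its string form is computed (the class and logger name below are illustrative), the lazy form never formats a record that gets discarded:

```python
import logging

class Expensive:
    """Tracks how often its string form is computed."""
    calls = 0
    def __str__(self):
        Expensive.calls += 1
        return "expensive value"

logger = logging.getLogger("lazy_demo")
logger.setLevel(logging.INFO)  # DEBUG records will be discarded
logger.propagate = False
obj = Expensive()

# Eager: concatenation converts obj to a string even though
# the DEBUG record is thrown away.
logger.debug("value: " + str(obj))
eager_calls = Expensive.calls  # 1

# Lazy: %s formatting is deferred, so __str__ never runs
# for a discarded DEBUG record.
logger.debug("value: %s", obj)
lazy_calls = Expensive.calls  # still 1
```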

Pitfall 3: Logging Sensitive Information

# Bad: Logging passwords and tokens
logger.info(f"User login: {username} {password}")

# Good: Sanitize sensitive data
logger.info(f"User login: {username}")

Pitfall 4: Ignoring Log Levels

# Bad: Everything at INFO level
logger.info("Debug info")
logger.info("Warning info")
logger.info("Error info")

# Good: Use appropriate levels
logger.debug("Debug info")
logger.warning("Warning info")
logger.error("Error info")

Pitfall 5: Not Handling Log File Growth

# Bad: Single log file grows indefinitely
handler = logging.FileHandler('app.log')

# Good: Use rotating file handler
handler = logging.handlers.RotatingFileHandler(
    'app.log',
    maxBytes=10485760,
    backupCount=5
)

Part 6: Advanced Patterns

Custom Filters

import logging
import re

class SensitiveDataFilter(logging.Filter):
    """Redact password values from the message template (args are not covered)."""
    def filter(self, record):
        if isinstance(record.msg, str):
            record.msg = re.sub(r'password=\S+', 'password=***', record.msg)
        return True

logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
handler.addFilter(SensitiveDataFilter())
logger.addHandler(handler)

Custom Handlers

import logging
import smtplib
from email.mime.text import MIMEText

class EmailHandler(logging.Handler):
    """Send critical errors via email"""
    def emit(self, record):
        if record.levelno >= logging.CRITICAL:
            msg = MIMEText(self.format(record))
            msg['Subject'] = f"Critical Error: {record.getMessage()}"
            msg['From'] = 'app@example.com'
            msg['To'] = 'admin@example.com'
            
            # Send email
            # (implementation details omitted)

logger = logging.getLogger(__name__)
logger.addHandler(EmailHandler())
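Note that the standard library already ships an email handler, logging.handlers.SMTPHandler, which covers this use case without a custom class. A sketch (the SMTP host and addresses are placeholders; nothing is sent until a qualifying record is emitted):

```python
import logging
import logging.handlers

# Built-in email handler; host and addresses here are placeholders
mail_handler = logging.handlers.SMTPHandler(
    mailhost=("smtp.example.com", 587),
    fromaddr="app@example.com",
    toaddrs=["admin@example.com"],
    subject="Critical application error",
)
mail_handler.setLevel(logging.CRITICAL)  # only CRITICAL records trigger email

logger = logging.getLogger("mail_demo")
logger.addHandler(mail_handler)
```

Setting the level on the handler replaces the manual `levelno` check in the custom example: the logging machinery filters records before `emit()` is ever called.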

Integration with Monitoring Systems

import logging
from pythonjsonlogger import jsonlogger

# Configure for ELK Stack, Datadog, or similar
handler = logging.StreamHandler()
formatter = jsonlogger.JsonFormatter()
handler.setFormatter(formatter)

logger = logging.getLogger(__name__)
logger.addHandler(handler)

# Logs are now in JSON format, easily parsed by monitoring systems
logger.info("Event occurred", extra={
    'event_type': 'user_signup',
    'user_id': '12345',
    'timestamp': '2025-12-16T10:30:00Z'
})

Conclusion

Proper logging is not optional; it's essential for building reliable, maintainable applications. By implementing the practices outlined in this guide, you'll create logging systems that:

  • Facilitate debugging: Quickly identify and fix issues
  • Enable monitoring: Track application health and performance
  • Support compliance: Maintain audit trails and security logs
  • Improve operations: Help your team understand what’s happening in production

Key takeaways:

  • Use module-level loggers with __name__ for better organization
  • Configure logging properly using one of the three methods (basic, programmatic, or dictionary)
  • Choose appropriate log levels for different types of messages
  • Implement log rotation to prevent disk space issues
  • Use structured logging for production systems
  • Avoid common pitfalls like logging sensitive data or using string formatting
  • Adapt configuration for your environment (development, staging, production)

Start implementing these practices today, and you'll build applications that are easier to debug, monitor, and maintain. Your future self, and your operations team, will thank you.

Happy logging!
