
Financial Data Analysis: Pandas and Quantitative Finance

Introduction

Quantitative finance combines mathematical models, statistical analysis, and computational tools to analyze financial markets and securities. Python, with its rich ecosystem of libraries including Pandas, NumPy, and specialized finance packages, has become the dominant language for quantitative analysis.

Whether you’re building algorithmic trading systems, managing investment portfolios, assessing risk, or analyzing market data, Python provides the tools necessary for sophisticated financial analysis. This guide covers the essential techniques and code patterns that quantitative analysts use daily.

We’ll explore financial data structures, time series analysis, risk metrics, portfolio optimization, and practical trading strategy implementation. Each section includes working code examples that you can adapt for your own analysis.


Financial Data Structures

Understanding how to represent and manipulate financial data is foundational to quantitative analysis.

Price Data Types

Financial data comes in several forms:

OHLCV Data: Open, High, Low, Close, Volume: the standard format for historical price data.

import pandas as pd
import numpy as np
from datetime import datetime, timedelta

# Create sample OHLCV data
np.random.seed(42)

dates = pd.date_range(start='2024-01-01', end='2024-12-31', freq='D')
n = len(dates)

# Generate random walk prices
returns = np.random.normal(0.0005, 0.02, n)
price = 100 * np.exp(np.cumsum(returns))

# Create OHLCV DataFrame
data = pd.DataFrame({
    'date': dates,
    'open': price * (1 + np.random.uniform(-0.01, 0.01, n)),
    'high': price * (1 + np.random.uniform(0.00, 0.02, n)),
    'low': price * (1 + np.random.uniform(-0.02, 0.00, n)),
    'close': price,
    'volume': np.random.randint(1000000, 10000000, n)
})

# Ensure high is highest and low is lowest
data['high'] = data[['open', 'high', 'close']].max(axis=1)
data['low'] = data[['open', 'low', 'close']].min(axis=1)

data.set_index('date', inplace=True)
print(data.head(10))

Tick Data: Individual transaction records with timestamp, price, and volume.

# Generate tick data
def generate_tick_data(symbol: str, n_ticks: int) -> pd.DataFrame:
    """Generate synthetic tick data."""
    timestamps = pd.date_range(
        start='2024-01-01 09:30',
        periods=n_ticks,
        freq='100ms'
    )
    
    # Random walk for price
    returns = np.random.normal(0, 0.0001, n_ticks)
    price = 100 + np.cumsum(returns)
    
    return pd.DataFrame({
        'timestamp': timestamps,
        'symbol': symbol,
        'price': price,
        'size': np.random.randint(1, 1000, n_ticks),
        'side': np.random.choice(['buy', 'sell'], n_ticks)
    })

tick_data = generate_tick_data('AAPL', 10000)
print(tick_data.head())
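
Tick data is usually too granular to analyze directly, so a common first step is aggregating it into bars. Below is a minimal sketch, using only pandas built-ins, that resamples the synthetic ticks above into one-minute OHLCV bars:

ticks = tick_data.set_index('timestamp')

# Resampler.ohlc() expands each bar into open/high/low/close columns
bars = ticks['price'].resample('1min').ohlc()
bars['volume'] = ticks['size'].resample('1min').sum()

print(bars.head())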

Working with OHLCV Data

class FinancialData:
    """Utilities for financial data manipulation."""
    
    @staticmethod
    def resample_ohlcv(
        data: pd.DataFrame,
        freq: str
    ) -> pd.DataFrame:
        """Resample OHLCV data to different timeframe."""
        ohlc_dict = {
            'open': 'first',
            'high': 'max',
            'low': 'min',
            'close': 'last',
            'volume': 'sum'
        }
        
        resampled = data.resample(freq).agg(ohlc_dict)
        return resampled.dropna()
    
    @staticmethod
    def calculate_returns(prices: pd.Series) -> pd.Series:
        """Calculate simple returns."""
        return prices.pct_change()
    
    @staticmethod
    def calculate_log_returns(prices: pd.Series) -> pd.Series:
        """Calculate log returns."""
        return np.log(prices / prices.shift(1))
    
    @staticmethod
    def calculate_cumulative_returns(returns: pd.Series) -> pd.Series:
        """Calculate cumulative returns."""
        return (1 + returns).cumprod() - 1
    
    @staticmethod
    def calculate_rolling_volatility(
        returns: pd.Series,
        window: int = 21
    ) -> pd.Series:
        """Calculate rolling annualized volatility."""
        return returns.rolling(window).std() * np.sqrt(252)


# Usage example
fin_data = FinancialData()

# Resample to weekly
weekly_data = fin_data.resample_ohlcv(data, 'W')

# Calculate returns
data['returns'] = fin_data.calculate_returns(data['close'])
data['log_returns'] = fin_data.calculate_log_returns(data['close'])
data['cumulative_returns'] = fin_data.calculate_cumulative_returns(data['returns'])
data['rolling_vol'] = fin_data.calculate_rolling_volatility(data['returns'])

print(data[['close', 'returns', 'cumulative_returns', 'rolling_vol']].tail())

Time Series Analysis

Financial time series have unique characteristics that require specialized analysis techniques.

Stationarity and Unit Roots

from scipy import stats

def adf_test(series: pd.Series) -> dict:
    """Augmented Dickey-Fuller test for stationarity."""
    from statsmodels.tsa.stattools import adfuller
    
    result = adfuller(series.dropna())
    
    return {
        'adf_statistic': result[0],
        'p_value': result[1],
        'critical_values': result[4],
        'stationary': result[1] < 0.05
    }

def kpss_test(series: pd.Series) -> dict:
    """KPSS test for stationarity."""
    from statsmodels.tsa.stattools import kpss
    
    result = kpss(series.dropna(), regression='c')
    
    return {
        'kpss_statistic': result[0],
        'p_value': result[1],
        'critical_values': result[3],
        'stationary': result[1] > 0.05
    }

# Test returns for stationarity
print("ADF Test on Returns:")
print(adf_test(data['returns']))

print("\nADF Test on Prices:")
print(adf_test(data['close']))
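
ADF and KPSS have opposite null hypotheses: ADF's null is non-stationarity, while KPSS's null is stationarity. Running both on the same series gives a more robust verdict, using the kpss_test helper defined above:

print("\nKPSS Test on Returns:")
print(kpss_test(data['returns']))

print("\nKPSS Test on Prices:")
print(kpss_test(data['close']))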

Moving Averages and Technical Indicators

def calculate_technical_indicators(prices: pd.Series) -> pd.DataFrame:
    """Calculate common technical indicators."""
    
    df = pd.DataFrame({'price': prices})
    
    # Simple Moving Averages
    df['sma_20'] = prices.rolling(20).mean()
    df['sma_50'] = prices.rolling(50).mean()
    df['sma_200'] = prices.rolling(200).mean()
    
    # Exponential Moving Averages
    df['ema_12'] = prices.ewm(span=12, adjust=False).mean()
    df['ema_26'] = prices.ewm(span=26, adjust=False).mean()
    
    # MACD
    df['macd'] = df['ema_12'] - df['ema_26']
    df['signal_line'] = df['macd'].ewm(span=9, adjust=False).mean()
    df['macd_histogram'] = df['macd'] - df['signal_line']
    
    # RSI (Relative Strength Index)
    delta = prices.diff()
    gain = (delta.where(delta > 0, 0)).rolling(14).mean()
    loss = (-delta.where(delta < 0, 0)).rolling(14).mean()
    rs = gain / loss
    df['rsi'] = 100 - (100 / (1 + rs))
    
    # Bollinger Bands
    df['bb_middle'] = prices.rolling(20).mean()
    bb_std = prices.rolling(20).std()
    df['bb_upper'] = df['bb_middle'] + (bb_std * 2)
    df['bb_lower'] = df['bb_middle'] - (bb_std * 2)
    
    # Average True Range (ATR)
    high = prices * 1.01  # Simulated for example
    low = prices * 0.99
    tr1 = high - low
    tr2 = abs(high - prices.shift(1))
    tr3 = abs(low - prices.shift(1))
    tr = pd.concat([tr1, tr2, tr3], axis=1).max(axis=1)
    df['atr'] = tr.rolling(14).mean()
    
    return df

indicators = calculate_technical_indicators(data['close'])
print(indicators.tail(10))

Time Series Decomposition

from statsmodels.tsa.seasonal import seasonal_decompose

def decompose_timeseries(
    series: pd.Series,
    period: int = 21
) -> pd.DataFrame:
    """Decompose time series into trend, seasonal, and residual components."""
    
    decomposition = seasonal_decompose(
        series.dropna(),
        model='multiplicative',
        period=period
    )
    
    return pd.DataFrame({
        'observed': series,
        'trend': decomposition.trend,
        'seasonal': decomposition.seasonal,
        'residual': decomposition.resid
    })

# Decompose prices
decomposition = decompose_timeseries(data['close'])
print(decomposition.tail())

Risk Metrics

Understanding and measuring risk is fundamental to quantitative finance.

Value at Risk (VaR)

def calculate_var(
    returns: pd.Series,
    confidence_level: float = 0.95,
    method: str = 'historical'
) -> float:
    """
    Calculate Value at Risk using various methods.
    
    Parameters:
    - returns: Series of returns
    - confidence_level: VaR confidence level (e.g., 0.95 for 95%)
    - method: 'historical', 'parametric', or 'monte_carlo'
    """
    
    if method == 'historical':
        # Historical VaR
        return -np.percentile(returns, (1 - confidence_level) * 100)
    
    elif method == 'parametric':
        # Parametric (Gaussian) VaR
        mu = returns.mean()
        sigma = returns.std()
        z = stats.norm.ppf(1 - confidence_level)
        return -(mu + z * sigma)
    
    elif method == 'monte_carlo':
        # Monte Carlo VaR
        mu = returns.mean()
        sigma = returns.std()
        
        simulations = np.random.normal(mu, sigma, 10000)
        return -np.percentile(simulations, (1 - confidence_level) * 100)
    
    else:
        raise ValueError(f"Unknown method: {method}")

def calculate_cvar(
    returns: pd.Series,
    confidence_level: float = 0.95
) -> float:
    """Calculate Conditional Value at Risk (Expected Shortfall)."""
    
    var = calculate_var(returns, confidence_level, 'historical')
    return -returns[returns <= -var].mean()


# Calculate risk metrics for the sample data
# (calculate_max_drawdown is defined in the Maximum Drawdown section below)
returns = data['returns'].dropna()

print("Risk Metrics:")
print(f"  VaR (95%):        {calculate_var(returns, 0.95):.4f}")
print(f"  VaR (99%):        {calculate_var(returns, 0.99):.4f}")
print(f"  CVaR (95%):       {calculate_cvar(returns, 0.95):.4f}")
print(f"  Max Drawdown:     {calculate_max_drawdown(data['close']):.4f}")
print(f"  Volatility (ann): {returns.std() * np.sqrt(252):.4f}")

Maximum Drawdown

def calculate_max_drawdown(prices: pd.Series) -> float:
    """Calculate maximum drawdown percentage."""
    
    # Calculate running maximum
    running_max = prices.expanding().max()
    
    # Calculate drawdown
    drawdown = (prices - running_max) / running_max
    
    # Return maximum (most negative) drawdown
    return drawdown.min()

def calculate_drawdown_series(prices: pd.Series) -> pd.Series:
    """Calculate drawdown time series."""
    
    running_max = prices.expanding().max()
    drawdown = (prices - running_max) / running_max
    
    return drawdown

def get_drawdown_periods(prices: pd.Series) -> list:
    """Identify drawdown periods."""
    
    drawdown = calculate_drawdown_series(prices)
    
    # Find periods where we're in drawdown
    in_drawdown = drawdown < 0
    
    # Identify periods
    periods = []
    start = None
    
    for i, (date, is_dd) in enumerate(in_drawdown.items()):
        if is_dd and start is None:
            start = date
        elif not is_dd and start is not None:
            periods.append({
                'start': start,
                'end': date,
                'depth': drawdown.loc[start:date].min(),
                'duration': (date - start).days
            })
            start = None
    
    return periods

# Calculate drawdown metrics
data['drawdown'] = calculate_drawdown_series(data['close'])
print(f"Current Drawdown: {data['drawdown'].iloc[-1]:.4f}")
print(f"Maximum Drawdown: {data['drawdown'].min():.4f}")

Sharpe and Sortino Ratios

def calculate_sharpe_ratio(
    returns: pd.Series,
    risk_free_rate: float = 0.0,
    periods_per_year: int = 252
) -> float:
    """Calculate Sharpe Ratio."""
    
    excess_returns = returns - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess_returns.mean() / excess_returns.std()


def calculate_sortino_ratio(
    returns: pd.Series,
    risk_free_rate: float = 0.0,
    periods_per_year: int = 252,
    target_return: float = 0.0
) -> float:
    """Calculate Sortino Ratio (uses downside deviation)."""
    
    excess_returns = returns - risk_free_rate / periods_per_year
    
    # Downside returns (returns below target)
    downside_returns = excess_returns[excess_returns < target_return]
    
    if len(downside_returns) == 0:
        return np.inf
    
    downside_std = downside_returns.std()
    
    return np.sqrt(periods_per_year) * excess_returns.mean() / downside_std


def calculate_information_ratio(
    returns: pd.Series,
    benchmark_returns: pd.Series
) -> float:
    """Calculate Information Ratio."""
    
    active_returns = returns - benchmark_returns
    tracking_error = active_returns.std() * np.sqrt(252)
    
    return (active_returns.mean() * 252) / tracking_error


# Calculate performance metrics
print("Performance Metrics:")
print(f"  Total Return:     {data['cumulative_returns'].iloc[-1]:.4f}")
print(f"  Annual Return:    {returns.mean() * 252:.4f}")
print(f"  Volatility:       {returns.std() * np.sqrt(252):.4f}")
print(f"  Sharpe Ratio:     {calculate_sharpe_ratio(returns):.4f}")
print(f"  Sortino Ratio:    {calculate_sortino_ratio(returns):.4f}")

Portfolio Optimization

Modern portfolio theory uses mathematical optimization to construct portfolios that maximize return for a given level of risk.

Mean-Variance Optimization

def calculate_portfolio_metrics(
    weights: np.ndarray,
    returns_df: pd.DataFrame
) -> dict:
    """Calculate portfolio return and volatility."""
    
    # Portfolio return
    portfolio_return = np.sum(returns_df.mean() * weights) * 252
    
    # Portfolio volatility
    portfolio_std = np.sqrt(
        np.dot(weights.T, np.dot(returns_df.cov() * 252, weights))
    )
    
    return {
        'return': portfolio_return,
        'volatility': portfolio_std,
        'sharpe': portfolio_return / portfolio_std
    }


def optimize_portfolio(
    returns_df: pd.DataFrame,
    target_return: float | None = None
) -> dict:
    """
    Optimize portfolio weights using Modern Portfolio Theory.
    
    Returns optimal weights that maximize Sharpe ratio.
    """
    from scipy.optimize import minimize
    
    n_assets = len(returns_df.columns)
    
    def neg_sharpe_ratio(weights):
        metrics = calculate_portfolio_metrics(weights, returns_df)
        return -metrics['sharpe']
    
    # Constraints
    constraints = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1}]  # Weights sum to 1
    
    if target_return is not None:
        constraints.append({
            'type': 'eq',
            'fun': lambda x: calculate_portfolio_metrics(x, returns_df)['return'] - target_return
        })
    
    # Bounds: each weight between 0 and 1 (no shorting)
    bounds = tuple((0, 1) for _ in range(n_assets))
    
    # Initial guess: equal weights
    initial_weights = np.array([1/n_assets] * n_assets)
    
    # Optimize
    result = minimize(
        neg_sharpe_ratio,
        initial_weights,
        method='SLSQP',
        bounds=bounds,
        constraints=constraints
    )
    
    optimal_weights = result.x
    metrics = calculate_portfolio_metrics(optimal_weights, returns_df)
    
    return {
        'weights': dict(zip(returns_df.columns, optimal_weights)),
        **metrics
    }


def efficient_frontier(
    returns_df: pd.DataFrame,
    n_points: int = 50
) -> pd.DataFrame:
    """Generate efficient frontier."""
    
    # Get return range
    mean_returns = returns_df.mean() * 252
    min_return = mean_returns.min()
    max_return = mean_returns.max()
    
    target_returns = np.linspace(min_return, max_return, n_points)
    
    results = []
    for target in target_returns:
        try:
            result = optimize_portfolio(returns_df, target)
            results.append({
                'target_return': target,
                'portfolio_return': result['return'],
                'volatility': result['volatility'],
                'sharpe': result['sharpe']
            })
        except Exception:
            # Skip targets for which the optimization is infeasible
            continue
    
    return pd.DataFrame(results)


# Example portfolio optimization
# Generate sample asset returns
np.random.seed(42)
assets = ['AAPL', 'GOOGL', 'MSFT', 'AMZN', 'TSLA']
n_days = 252

asset_returns = pd.DataFrame({
    asset: np.random.normal(0.001, 0.02, n_days)
    for asset in assets
}, index=pd.date_range('2024-01-01', periods=n_days, freq='D'))

# Optimize portfolio
optimal = optimize_portfolio(asset_returns)

print("Optimal Portfolio:")
print(f"  Return: {optimal['return']:.4f}")
print(f"  Volatility: {optimal['volatility']:.4f}")
print(f"  Sharpe Ratio: {optimal['sharpe']:.4f}")
print("\nWeights:")
for asset, weight in optimal['weights'].items():
    if weight > 0.01:
        print(f"  {asset}: {weight:.2%}")

Risk Parity Portfolio

def calculate_risk_contributions(weights: np.ndarray, cov_matrix: np.ndarray) -> np.ndarray:
    """Calculate risk contribution of each asset."""
    
    portfolio_vol = np.sqrt(np.dot(weights.T, np.dot(cov_matrix, weights)))
    marginal_risk = np.dot(cov_matrix, weights)
    risk_contrib = weights * marginal_risk / portfolio_vol
    
    return risk_contrib


def optimize_risk_parity(cov_matrix: np.ndarray) -> np.ndarray:
    """Optimize for risk parity (equal risk contribution)."""
    from scipy.optimize import minimize
    
    n_assets = len(cov_matrix)
    
    def risk_parity_objective(weights):
        risk_contrib = calculate_risk_contributions(weights, cov_matrix * 252)
        target_risk = risk_contrib.sum() / n_assets
        
        # Sum of squared differences from target risk
        return np.sum((risk_contrib - target_risk) ** 2)
    
    # Constraints: weights sum to 1
    constraints = [{'type': 'eq', 'fun': lambda x: np.sum(x) - 1}]
    bounds = tuple((0, 1) for _ in range(n_assets))
    
    result = minimize(
        risk_parity_objective,
        np.array([1/n_assets] * n_assets),
        method='SLSQP',
        bounds=bounds,
        constraints=constraints
    )
    
    return result.x
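
As a quick sanity check, run the optimizer on the sample asset returns from the previous section and verify that the risk contributions come out roughly equal:

# Risk parity weights for the sample assets
cov_matrix = asset_returns.cov().values
rp_weights = optimize_risk_parity(cov_matrix)
risk_contribs = calculate_risk_contributions(rp_weights, cov_matrix * 252)

print("Risk Parity Weights:")
for asset, weight in zip(asset_returns.columns, rp_weights):
    print(f"  {asset}: {weight:.2%}")
print("Risk contributions:", np.round(risk_contribs, 6))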

Backtesting Trading Strategies

Backtesting applies trading strategies to historical data to evaluate performance.

Simple Backtest Framework

class Backtester:
    def __init__(self, initial_capital: float = 100000):
        self.initial_capital = initial_capital
        self.positions = []
        self.trades = []
    
    def run_strategy(
        self,
        prices: pd.Series,
        signals: pd.Series,
        transaction_cost: float = 0.001
    ) -> pd.DataFrame:
        """Run backtest with given signals."""
        
        # Initialize
        capital = self.initial_capital
        position = 0  # shares
        cash = capital
        
        results = []
        
        for date, price in prices.items():
            signal = signals.loc[date] if date in signals.index else 0
            
            # Trading logic
            if signal == 1 and position == 0:  # Buy signal
                shares_to_buy = int(cash / (price * (1 + transaction_cost)))
                if shares_to_buy > 0:
                    cost = shares_to_buy * price * (1 + transaction_cost)
                    cash -= cost
                    position = shares_to_buy
                    self.trades.append({
                        'date': date,
                        'action': 'BUY',
                        'shares': shares_to_buy,
                        'price': price,
                        'cost': cost
                    })
            
            elif signal == -1 and position > 0:  # Sell signal
                proceeds = position * price * (1 - transaction_cost)
                cash += proceeds
                self.trades.append({
                    'date': date,
                    'action': 'SELL',
                    'shares': position,
                    'price': price,
                    'proceeds': proceeds
                })
                position = 0
            
            # Calculate portfolio value
            portfolio_value = cash + position * price
            
            results.append({
                'date': date,
                'price': price,
                'position': position,
                'cash': cash,
                'portfolio_value': portfolio_value,
                'signal': signal
            })
        
        return pd.DataFrame(results).set_index('date')
    
    def calculate_metrics(self, results: pd.DataFrame) -> dict:
        """Calculate backtest performance metrics."""
        
        returns = results['portfolio_value'].pct_change().dropna()
        
        return {
            'total_return': (results['portfolio_value'].iloc[-1] / self.initial_capital) - 1,
            'annual_return': returns.mean() * 252,
            'annual_volatility': returns.std() * np.sqrt(252),
            'sharpe_ratio': calculate_sharpe_ratio(returns),
            'max_drawdown': calculate_max_drawdown(results['portfolio_value']),
            'num_trades': len(self.trades)
        }


# Example: Moving Average Crossover Strategy
def moving_average_crossover(prices: pd.Series, short_window: int = 20, long_window: int = 50) -> pd.Series:
    """Generate trading signals for MA crossover strategy."""
    
    short_ma = prices.rolling(short_window).mean()
    long_ma = prices.rolling(long_window).mean()
    
    signals = pd.Series(0, index=prices.index)
    signals[short_ma > long_ma] = 1
    signals[short_ma < long_ma] = -1
    
    return signals

# Run backtest
signals = moving_average_crossover(data['close'])
backtester = Backtester(initial_capital=100000)
results = backtester.run_strategy(data['close'], signals)

print("Backtest Results:")
metrics = backtester.calculate_metrics(results)
for key, value in metrics.items():
    print(f"  {key}: {value:.4f}")

Implementation Checklist

Data Management

  • Establish data sources (APIs, files, databases)
  • Implement data cleaning pipelines
  • Handle missing data appropriately
  • Create data validation checks
  • Build efficient data storage (HDF5, Parquet; see the sketch after this list)
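
On the storage point, pandas writes Parquet out of the box. A minimal sketch, assuming pyarrow (or fastparquet) is installed and using a hypothetical local file path:

# Persist OHLCV data to columnar Parquet and read it back
data.to_parquet('ohlcv_daily.parquet')
restored = pd.read_parquet('ohlcv_daily.parquet')
print(restored.shape)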

Analysis Infrastructure

  • Set up Jupyter environment
  • Configure Pandas and NumPy
  • Install specialized libraries (statsmodels, scipy)
  • Implement logging and error handling

Model Development

  • Document model assumptions
  • Implement backtesting framework
  • Validate model performance
  • Stress test models
  • Implement risk controls

Production Deployment

  • Create automated pipelines
  • Implement monitoring
  • Build alerting systems
  • Document code and processes

Summary

Quantitative finance with Python requires mastery of several key areas:

  1. Financial data structures: OHLCV, tick data, and proper handling of time series.

  2. Time series analysis: Stationarity testing, decomposition, and technical indicators.

  3. Risk metrics: VaR, CVaR, drawdowns, and performance ratios (Sharpe, Sortino).

  4. Portfolio optimization: Mean-variance optimization, efficient frontiers, and risk parity.

  5. Backtesting: Systematic strategy evaluation with proper performance metrics.

The library ecosystem of Pandas, NumPy, SciPy, and statsmodels provides powerful tools for all of these tasks. Combined with proper risk management and validation, they form the foundation of quantitative analysis.

