Introduction
TimescaleDB powers production applications across diverse industries, from monitoring millions of IoT sensors to tracking financial transactions in real time. Understanding real-world use cases helps you envision how TimescaleDB can solve your specific challenges. In this article, we explore detailed production use cases, with schema designs, query patterns, and implementation strategies that have proven successful in the field.
IoT Monitoring and Analytics
Internet of Things (IoT) applications represent one of the largest markets for time-series databases. TimescaleDB’s combination of high write throughput and SQL query flexibility makes it ideal for IoT platforms.
Smart Building Management
Smart buildings contain thousands of sensors measuring temperature, humidity, occupancy, light, and energy consumption:
-- Schema for smart building monitoring
CREATE TABLE building_sensors (
time TIMESTAMPTZ NOT NULL,
building_id UUID NOT NULL,
floor_id UUID NOT NULL,
zone_id UUID NOT NULL,
sensor_id UUID NOT NULL,
sensor_type TEXT NOT NULL, -- 'temperature', 'humidity', 'occupancy', 'light'
value DOUBLE PRECISION NOT NULL,
unit TEXT NOT NULL
);
SELECT create_hypertable('building_sensors', 'time',
chunk_time_interval => INTERVAL '1 hour',
if_not_exists => TRUE);
-- Segment by building for efficient queries
ALTER TABLE building_sensors SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'building_id, sensor_type'
);
-- Composite index for common queries
CREATE INDEX idx_building_sensors_lookup
ON building_sensors (building_id, floor_id, time DESC);
-- Query: Average temperature by floor
SELECT
f.floor_name,
AVG(s.value) AS avg_temp
FROM building_sensors s
JOIN floors f ON s.floor_id = f.floor_id
WHERE s.time > NOW() - INTERVAL '24 hours'
AND s.sensor_type = 'temperature'
GROUP BY f.floor_name
ORDER BY avg_temp DESC;
-- Query: Occupancy trends over time
SELECT
time_bucket('15 minutes', time) AS bucket,
zone_id,
AVG(value) AS avg_occupancy
FROM building_sensors
WHERE time > NOW() - INTERVAL '7 days'
AND sensor_type = 'occupancy'
GROUP BY bucket, zone_id
ORDER BY bucket;
-- Automated data retention
SELECT add_retention_policy('building_sensors', INTERVAL '2 years');
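The time_bucket() calls above floor each timestamp to the start of its interval. For intervals without month components, TimescaleDB aligns buckets to the Unix epoch; a minimal client-side equivalent in Python (the function name mirrors the SQL one but is otherwise illustrative) looks like this:

```python
from datetime import datetime, timedelta

def time_bucket(interval: timedelta, ts: datetime) -> datetime:
    """Floor a timestamp to the start of its interval, mirroring
    TimescaleDB's time_bucket() for epoch-aligned intervals."""
    epoch = datetime(1970, 1, 1, tzinfo=ts.tzinfo)
    seconds = (ts - epoch).total_seconds()
    width = interval.total_seconds()
    return epoch + timedelta(seconds=(seconds // width) * width)

# 10:37:12 falls in the 10:30-10:45 bucket
bucket = time_bucket(timedelta(minutes=15), datetime(2024, 1, 1, 10, 37, 12))
```

This is why rows grouped by the bucket expression land in stable, reproducible intervals regardless of when the query runs.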
Smart building systems typically aggregate data from multiple sources: HVAC systems, lighting controls, access systems, and environmental sensors. TimescaleDB’s JOIN capability allows correlating data across these systems.
Industrial IoT and Manufacturing
Manufacturing environments have demanding requirements: high-frequency data collection, complex event processing, and real-time alerting:
-- Industrial equipment monitoring
CREATE TABLE machine_metrics (
time TIMESTAMPTZ NOT NULL,
machine_id UUID NOT NULL,
machine_type TEXT NOT NULL, -- 'cnc', 'robot', 'conveyor', 'sensor'
metric_name TEXT NOT NULL, -- 'temperature', 'vibration', 'speed', 'power'
value DOUBLE PRECISION NOT NULL,
status TEXT, -- 'normal', 'warning', 'critical'
metadata JSONB
);
SELECT create_hypertable('machine_metrics', 'time',
chunk_time_interval => INTERVAL '1 hour');
-- With very high ingest rates, size chunk_time_interval so that a
-- chunk's data and indexes fit in memory; avoid very small chunks,
-- which add planning and metadata overhead
-- Predictive maintenance: pre-aggregate per-machine statistics
-- (window functions and NOW() cannot appear inside a continuous
-- aggregate, so the anomaly rule is applied in a query on top)
CREATE MATERIALIZED VIEW machine_health
WITH (timescaledb.continuous) AS
SELECT
time_bucket('5 minutes', time) AS bucket,
machine_id,
metric_name,
AVG(value) AS avg_value,
MAX(value) AS max_value,
MIN(value) AS min_value,
STDDEV(value) AS stddev_value
FROM machine_metrics
GROUP BY bucket, machine_id, metric_name;
-- Anomaly indicator: flag buckets whose peak exceeds 1.5x the
-- trailing average (12 preceding buckets plus the current one)
SELECT
bucket,
machine_id,
metric_name,
max_value,
CASE
WHEN max_value > 1.5 * AVG(avg_value) OVER (
PARTITION BY machine_id, metric_name
ORDER BY bucket
ROWS BETWEEN 12 PRECEDING AND CURRENT ROW
) THEN 'warning'
ELSE 'normal'
END AS health_status
FROM machine_health
WHERE bucket > NOW() - INTERVAL '30 days';
-- Refresh policy: keep machine_health up to date
SELECT add_continuous_aggregate_policy('machine_health',
start_offset => INTERVAL '1 hour',
end_offset => INTERVAL '5 minutes',
schedule_interval => INTERVAL '5 minutes');
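The anomaly rule compares each bucket's value against a trailing average over the 12 preceding buckets plus the current one. A minimal Python sketch of the same rule, assuming the bucketed values have already been fetched (`health_status` is a hypothetical helper, not part of any API):

```python
from collections import deque
from statistics import mean

def health_status(values, window=13, factor=1.5):
    """Flag a reading as 'warning' when it exceeds `factor` times the
    trailing-window average (12 preceding values + current), mirroring
    the SQL anomaly rule."""
    recent = deque(maxlen=window)  # sliding window of recent readings
    statuses = []
    for v in values:
        recent.append(v)
        statuses.append('warning' if v > factor * mean(recent) else 'normal')
    return statuses

health_status([10, 11, 10, 30])  # last reading trips the 1.5x rule
```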
Manufacturing use cases often require integration with SCADA systems, PLCs, and industrial protocols. TimescaleDB handles the data ingestion layer while maintaining compatibility with existing operational technology infrastructure.
Energy Grid Monitoring
Utility companies monitor distributed energy resources, grid equipment, and consumption patterns:
-- Energy consumption tracking (the GEOGRAPHY column requires the PostGIS extension)
CREATE TABLE energy_readings (
time TIMESTAMPTZ NOT NULL,
meter_id UUID NOT NULL,
meter_type TEXT NOT NULL, -- 'residential', 'commercial', 'industrial'
consumption_kwh DOUBLE PRECISION NOT NULL,
demand_kw DOUBLE PRECISION,
voltage DOUBLE PRECISION,
power_factor DOUBLE PRECISION,
location GEOGRAPHY(POINT)
);
SELECT create_hypertable('energy_readings', 'time',
chunk_time_interval => INTERVAL '1 day');
-- Spatial queries for grid analysis
SELECT
ST_AsGeoJSON(location) AS location,
SUM(consumption_kwh) AS total_consumption
FROM energy_readings
WHERE time > NOW() - INTERVAL '24 hours'
AND ST_DWithin(
location,
ST_MakePoint(-122.4194, 37.7749)::geography,
5000 -- 5km radius
)
GROUP BY location;
-- Peak demand analysis: total hourly demand, highest hours first
SELECT
time_bucket('1 hour', time) AS hour,
meter_type,
SUM(demand_kw) AS total_demand_kw
FROM energy_readings
WHERE time > NOW() - INTERVAL '30 days'
GROUP BY hour, meter_type
ORDER BY total_demand_kw DESC;
Financial Applications
Financial services generate massive volumes of time-series data from market feeds, trading systems, and risk management platforms.
Stock Market Data Analysis
TimescaleDB handles tick data, aggregate statistics, and technical indicators:
-- Stock price tick data
CREATE TABLE stock_ticks (
time TIMESTAMPTZ NOT NULL,
symbol TEXT NOT NULL,
price DOUBLE PRECISION NOT NULL,
volume BIGINT NOT NULL,
bid DOUBLE PRECISION,
ask DOUBLE PRECISION,
exchange TEXT
);
SELECT create_hypertable('stock_ticks', 'time',
chunk_time_interval => INTERVAL '1 day');
-- Efficient storage with compression
ALTER TABLE stock_ticks SET (
timescaledb.compress,
timescaledb.compress_segmentby = 'symbol'
);
-- OHLC (Open-High-Low-Close) aggregation
CREATE MATERIALIZED VIEW stock_ohlc
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 minute', time) AS bucket,
symbol,
FIRST(price, time) AS open,
MAX(price) AS high,
MIN(price) AS low,
LAST(price, time) AS close,
SUM(volume) AS volume
FROM stock_ticks
GROUP BY bucket, symbol;
-- Note: NOW() is not allowed in a continuous aggregate definition;
-- limit history with refresh and retention policies instead
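The FIRST/MAX/MIN/LAST pattern is the standard OHLCV rollup. As a sanity check, here is the same aggregation over in-memory ticks in Python; `ohlc` and its `(epoch_seconds, symbol, price, volume)` tuple layout are illustrative assumptions:

```python
from collections import defaultdict

def ohlc(ticks, bucket_seconds=60):
    """Aggregate (epoch_seconds, symbol, price, volume) ticks into
    per-bucket OHLCV rows, like the continuous aggregate above."""
    buckets = defaultdict(list)
    for ts, symbol, price, volume in sorted(ticks):  # order by time
        buckets[(ts - ts % bucket_seconds, symbol)].append((price, volume))
    return {
        key: {
            'open': rows[0][0],                  # first price in bucket
            'high': max(p for p, _ in rows),
            'low': min(p for p, _ in rows),
            'close': rows[-1][0],                # last price in bucket
            'volume': sum(v for _, v in rows),
        }
        for key, rows in buckets.items()
    }
```

The continuous aggregate does this incrementally as ticks arrive, instead of rescanning the raw data.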
-- Technical indicators. Continuous aggregates do not support window
-- functions, so compute these in a regular view over stock_ohlc
CREATE VIEW stock_indicators AS
WITH changes AS (
SELECT
bucket,
symbol,
close,
close - LAG(close) OVER (PARTITION BY symbol ORDER BY bucket) AS change
FROM stock_ohlc
)
SELECT
bucket,
symbol,
close,
-- Simple Moving Average (20 periods)
AVG(close) OVER (
PARTITION BY symbol
ORDER BY bucket
ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
) AS sma_20,
-- 12-period moving average (a simple stand-in for the EMA, which
-- is recursive and cannot be expressed as a fixed window frame)
AVG(close) OVER (
PARTITION BY symbol
ORDER BY bucket
ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
) AS sma_12,
-- Relative Strength Index (14 periods, simple averages)
100 - 100 / (1 +
AVG(GREATEST(change, 0)) OVER w /
NULLIF(AVG(GREATEST(-change, 0)) OVER w, 0)
) AS rsi_14
FROM changes
WINDOW w AS (
PARTITION BY symbol
ORDER BY bucket
ROWS BETWEEN 13 PRECEDING AND CURRENT ROW
);
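RSI reduces to averaging gains and losses over a trailing window of price changes. A client-side sketch using simple (not Wilder-smoothed) averages; `rsi` is a hypothetical helper for illustration:

```python
def rsi(closes, period=14):
    """Relative Strength Index over the trailing `period` price
    changes, using simple averages of gains and losses."""
    changes = [b - a for a, b in zip(closes, closes[1:])]
    window = changes[-period:]
    avg_gain = sum(c for c in window if c > 0) / len(window)
    avg_loss = sum(-c for c in window if c < 0) / len(window)
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximum RSI
    return 100 - 100 / (1 + avg_gain / avg_loss)
```

Production systems typically use Wilder's exponential smoothing for the gain/loss averages, which the fixed-frame SQL version also cannot express directly.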
Cryptocurrency Analytics
Cryptocurrency platforms face extreme data volumes with 24/7 trading:
-- Cryptocurrency market data
CREATE TABLE crypto_ticks (
time TIMESTAMPTZ NOT NULL,
exchange TEXT NOT NULL,
pair TEXT NOT NULL, -- 'BTC/USD', 'ETH/USD'
price DOUBLE PRECISION NOT NULL,
volume_24h DOUBLE PRECISION,
bid DOUBLE PRECISION,
ask DOUBLE PRECISION
);
SELECT create_hypertable('crypto_ticks', 'time',
chunk_time_interval => INTERVAL '1 hour',
if_not_exists => TRUE);
-- Cross-exchange price analysis (spread / arbitrage detection).
-- Ticks from different exchanges rarely share exact timestamps,
-- so compare per-minute average prices rather than raw ticks
WITH per_minute AS (
SELECT
time_bucket('1 minute', time) AS bucket,
exchange,
pair,
AVG(price) AS price
FROM crypto_ticks
WHERE time > NOW() - INTERVAL '1 hour'
AND pair = 'BTC/USD'
GROUP BY bucket, exchange, pair
)
SELECT
p1.bucket,
p1.pair,
p1.exchange AS exchange_1,
p2.exchange AS exchange_2,
p1.price AS price_1,
p2.price AS price_2,
ABS(p1.price - p2.price) AS price_diff,
ABS(p1.price - p2.price) / p1.price * 100 AS price_diff_pct
FROM per_minute p1
JOIN per_minute p2 ON
p1.bucket = p2.bucket AND
p1.pair = p2.pair AND
p1.exchange < p2.exchange
ORDER BY price_diff_pct DESC
LIMIT 10;
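The pairwise-spread computation itself is simple once per-exchange prices for a minute are in hand. A Python sketch (`spreads` and its input shape are illustrative assumptions):

```python
from itertools import combinations

def spreads(minute_prices):
    """Given {exchange: price} for one minute and one pair, return
    (exchange_1, exchange_2, spread_pct) rows, widest spread first."""
    rows = []
    for (ex1, p1), (ex2, p2) in combinations(sorted(minute_prices.items()), 2):
        rows.append((ex1, ex2, abs(p1 - p2) / p1 * 100))
    return sorted(rows, key=lambda r: r[2], reverse=True)

spreads({'binance': 50000.0, 'coinbase': 50100.0, 'kraken': 49900.0})
```

The `exchange < exchange` condition in the SQL join plays the same role as `combinations` here: each exchange pair is compared exactly once.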
DevOps and Observability
TimescaleDB has become a popular choice for application monitoring and observability platforms.
Application Metrics
Track application performance metrics at scale:
-- Application metrics with labels
CREATE TABLE app_metrics (
time TIMESTAMPTZ NOT NULL,
service TEXT NOT NULL,
environment TEXT NOT NULL, -- 'production', 'staging', 'development'
host TEXT,
metric_name TEXT NOT NULL, -- 'cpu', 'memory', 'requests', 'latency'
value DOUBLE PRECISION NOT NULL,
labels JSONB DEFAULT '{}'
);
SELECT create_hypertable('app_metrics', 'time',
chunk_time_interval => INTERVAL '1 hour');
CREATE INDEX idx_app_metrics_labels
ON app_metrics USING GIN (labels);
-- Common query: service health dashboard
SELECT
time_bucket('1 minute', time) AS bucket,
service,
environment,
AVG(value) FILTER (WHERE metric_name = 'cpu') AS avg_cpu,
AVG(value) FILTER (WHERE metric_name = 'memory') AS avg_memory,
AVG(value) FILTER (WHERE metric_name = 'latency') AS avg_latency,
COUNT(*) FILTER (WHERE metric_name = 'requests') AS request_count
FROM app_metrics
WHERE time > NOW() - INTERVAL '1 hour'
GROUP BY bucket, service, environment
ORDER BY bucket;
-- Alerting: high latency detection
SELECT
service,
time_bucket('1 minute', time) AS minute,
PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY value) AS p99_latency
FROM app_metrics
WHERE time > NOW() - INTERVAL '5 minutes'
AND metric_name = 'latency'
AND service = 'api-gateway'
GROUP BY service, minute
HAVING PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY value) > 1.0;
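PERCENTILE_CONT interpolates linearly between the two order statistics surrounding the requested fraction. A Python sketch of the same semantics (function name borrowed from SQL, otherwise illustrative):

```python
def percentile_cont(fraction, values):
    """Continuous percentile with linear interpolation, matching
    PostgreSQL's PERCENTILE_CONT ordered-set aggregate."""
    ordered = sorted(values)
    rank = fraction * (len(ordered) - 1)  # fractional rank into the sorted list
    lower = int(rank)
    upper = min(lower + 1, len(ordered) - 1)
    return ordered[lower] + (rank - lower) * (ordered[upper] - ordered[lower])
```

This is why p99 latency can fall strictly between two observed samples rather than equal one of them.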
Log Analytics
TimescaleDB efficiently stores and queries application logs:
-- Structured logging
CREATE TABLE application_logs (
time TIMESTAMPTZ NOT NULL,
service TEXT NOT NULL,
level TEXT NOT NULL, -- 'DEBUG', 'INFO', 'WARN', 'ERROR'
message TEXT NOT NULL,
trace_id UUID,
user_id UUID,
metadata JSONB
);
SELECT create_hypertable('application_logs', 'time',
chunk_time_interval => INTERVAL '1 day');
-- Full-text search using PostgreSQL text search
ALTER TABLE application_logs
ADD COLUMN message_tsv tsvector
GENERATED ALWAYS AS (to_tsvector('english', message)) STORED;
CREATE INDEX idx_logs_search
ON application_logs USING GIN (message_tsv);
-- Error rate tracking
SELECT
time_bucket('5 minutes', time) AS bucket,
service,
COUNT(*) FILTER (WHERE level = 'ERROR') AS error_count,
COUNT(*) FILTER (WHERE level IN ('WARN', 'ERROR')) AS warning_count,
COUNT(*) AS total_count,
COUNT(*) FILTER (WHERE level = 'ERROR')::numeric /
NULLIF(COUNT(*), 0) * 100 AS error_rate_pct
FROM application_logs
WHERE time > NOW() - INTERVAL '1 hour'
GROUP BY bucket, service
HAVING COUNT(*) FILTER (WHERE level = 'ERROR') > 0
ORDER BY error_count DESC;
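The error-rate expression is a simple ratio of filtered counts; a Python sketch of the same computation for one bucket of log levels (`error_rate_pct` is a hypothetical helper):

```python
from collections import Counter

def error_rate_pct(levels):
    """Percentage of ERROR entries among all log entries in a bucket,
    mirroring the SQL error-rate expression (None for empty input,
    like the NULLIF guard)."""
    counts = Counter(levels)
    total = sum(counts.values())
    return counts['ERROR'] / total * 100 if total else None

error_rate_pct(['INFO', 'ERROR', 'INFO', 'WARN'])  # 25.0
```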
-- Trace analysis
SELECT
trace_id,
service,
COUNT(*) AS event_count,
MIN(time) AS trace_start,
MAX(time) AS trace_end,
MAX(time) - MIN(time) AS trace_duration
FROM application_logs
WHERE trace_id IS NOT NULL
AND time > NOW() - INTERVAL '30 minutes'
GROUP BY trace_id, service
ORDER BY trace_duration DESC
LIMIT 20;
Infrastructure Monitoring
Monitor infrastructure components including servers, containers, and Kubernetes:
-- Container metrics
CREATE TABLE container_metrics (
time TIMESTAMPTZ NOT NULL,
cluster TEXT NOT NULL,
namespace TEXT NOT NULL,
pod TEXT NOT NULL,
container TEXT NOT NULL,
metric_name TEXT NOT NULL,
value DOUBLE PRECISION NOT NULL
);
SELECT create_hypertable('container_metrics', 'time',
chunk_time_interval => INTERVAL '1 hour');
-- Kubernetes resource analysis
SELECT
time_bucket('5 minutes', time) AS bucket,
namespace,
pod,
AVG(value) FILTER (WHERE metric_name = 'cpu_usage') AS avg_cpu,
AVG(value) FILTER (WHERE metric_name = 'memory_usage') AS avg_memory
FROM container_metrics
WHERE time > NOW() - INTERVAL '24 hours'
AND cluster = 'production'
GROUP BY bucket, namespace, pod
ORDER BY avg_cpu DESC
LIMIT 20;
-- Resource quota analysis. Subqueries are not allowed inside a
-- continuous aggregate, so build a 5-minute rollup first and a
-- hierarchical continuous aggregate on top (TimescaleDB 2.9+)
CREATE MATERIALIZED VIEW pod_usage
WITH (timescaledb.continuous) AS
SELECT
time_bucket('5 minutes', time) AS bucket,
namespace,
pod,
AVG(value) FILTER (WHERE metric_name = 'cpu_usage') AS avg_cpu,
AVG(value) FILTER (WHERE metric_name = 'memory_usage') AS avg_memory
FROM container_metrics
GROUP BY bucket, namespace, pod;
CREATE MATERIALIZED VIEW resource_usage
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', bucket) AS hour_bucket,
namespace,
SUM(avg_cpu) AS total_cpu,
SUM(avg_memory) AS total_memory
FROM pod_usage
GROUP BY hour_bucket, namespace;
User Behavior Analytics
Understanding user behavior drives product decisions:
-- User activity tracking
CREATE TABLE user_events (
time TIMESTAMPTZ NOT NULL,
user_id UUID NOT NULL,
event_type TEXT NOT NULL, -- 'page_view', 'click', 'purchase', 'login'
page_url TEXT,
session_id UUID,
metadata JSONB DEFAULT '{}'
);
SELECT create_hypertable('user_events', 'time',
chunk_time_interval => INTERVAL '1 day');
-- Session analysis
WITH sessions AS (
SELECT
session_id,
user_id,
MIN(time) AS session_start,
MAX(time) AS session_end,
COUNT(*) AS event_count,
COUNT(*) FILTER (WHERE event_type = 'page_view') AS page_views,
COUNT(*) FILTER (WHERE event_type = 'purchase') AS purchases
FROM user_events
WHERE time > NOW() - INTERVAL '7 days'
GROUP BY session_id, user_id
)
SELECT
date_trunc('day', session_start) AS day,
COUNT(*) AS total_sessions,
AVG(session_end - session_start) AS avg_duration,
AVG(event_count) AS avg_events_per_session,
SUM(purchases) AS total_purchases,
SUM(purchases)::numeric / COUNT(*) AS conversion_rate
FROM sessions
GROUP BY day
ORDER BY day;
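The conversion metric is a small computation once events are grouped into sessions. A Python sketch of one common variant, the share of sessions that contain at least one purchase (`conversion_rate` and the input shape are illustrative assumptions):

```python
def conversion_rate(sessions):
    """Share of sessions containing at least one purchase event.
    `sessions` maps session_id -> list of event type strings."""
    if not sessions:
        return 0.0
    converted = sum(1 for events in sessions.values() if 'purchase' in events)
    return converted / len(sessions)

conversion_rate({
    's1': ['page_view', 'click', 'purchase'],
    's2': ['page_view'],
    's3': ['page_view', 'click'],
    's4': ['page_view', 'purchase'],
})  # 0.5
```

Note the SQL above computes purchases per session (which can exceed 1 when a session has multiple purchases); this sketch counts each converting session once.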
-- Funnel analysis
SELECT
step_1.user_id,
step_1.time AS step_1_time,
step_2.time AS step_2_time,
step_3.time AS step_3_time,
step_3.time - step_1.time AS time_to_complete
FROM (
SELECT user_id, MIN(time) AS time
FROM user_events
WHERE event_type = 'page_view'
AND page_url = '/pricing'
GROUP BY user_id
) step_1
LEFT JOIN (
SELECT user_id, MIN(time) AS time
FROM user_events
WHERE event_type = 'click'
AND metadata->>'button' = 'signup'
GROUP BY user_id
) step_2 ON step_1.user_id = step_2.user_id
LEFT JOIN (
SELECT user_id, MIN(time) AS time
FROM user_events
WHERE event_type = 'purchase'
GROUP BY user_id
) step_3 ON step_2.user_id = step_3.user_id
WHERE step_2.time IS NOT NULL;
Healthcare and Wearables
Healthcare applications require reliable time-series storage for patient monitoring:
-- Wearable device data
CREATE TABLE wearable_readings (
time TIMESTAMPTZ NOT NULL,
patient_id UUID NOT NULL,
device_id UUID NOT NULL,
heart_rate INTEGER,
steps INTEGER,
sleep_stage TEXT, -- 'awake', 'light', 'deep', 'rem'
blood_oxygen INTEGER,
body_temp DOUBLE PRECISION,
metadata JSONB
);
SELECT create_hypertable('wearable_readings', 'time',
chunk_time_interval => INTERVAL '1 day');
-- Patient health summary
SELECT
patient_id,
time_bucket('1 hour', time) AS bucket,
AVG(heart_rate) AS avg_hr,
MIN(heart_rate) AS min_hr,
MAX(heart_rate) AS max_hr,
AVG(steps) AS steps,
MODE() WITHIN GROUP (ORDER BY sleep_stage) AS dominant_sleep_stage
FROM wearable_readings
WHERE time > NOW() - INTERVAL '24 hours'
AND patient_id = 'patient-uuid-here'
GROUP BY patient_id, bucket
ORDER BY bucket;
-- Anomaly detection for patient monitoring
WITH baseline AS (
SELECT
patient_id,
AVG(heart_rate) AS avg_hr,
STDDEV(heart_rate) AS std_hr
FROM wearable_readings
WHERE time > NOW() - INTERVAL '30 days'
GROUP BY patient_id
),
current AS (
SELECT
patient_id,
time,
heart_rate
FROM wearable_readings
WHERE time > NOW() - INTERVAL '1 hour'
)
SELECT
c.patient_id,
c.time,
c.heart_rate,
b.avg_hr,
b.std_hr,
(c.heart_rate - b.avg_hr) / NULLIF(b.std_hr, 0) AS z_score
FROM current c
JOIN baseline b ON c.patient_id = b.patient_id
WHERE ABS((c.heart_rate - b.avg_hr) / NULLIF(b.std_hr, 0)) > 3;
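The z-score rule above flags readings more than three standard deviations from the patient's 30-day baseline. The same logic in a Python sketch, assuming baseline and current readings have already been fetched (`flag_anomalies` is a hypothetical helper):

```python
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Return (reading, z_score) pairs for current heart-rate readings
    more than `threshold` standard deviations from the baseline."""
    avg, std = mean(baseline), stdev(baseline)
    if std == 0:
        return []  # flat baseline: z-score undefined (NULLIF guard in SQL)
    return [(hr, (hr - avg) / std) for hr in current
            if abs((hr - avg) / std) > threshold]
```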
Geospatial Time-Series
Combine time-series with geospatial data for location-aware applications:
-- Fleet tracking
CREATE TABLE vehicle_tracking (
time TIMESTAMPTZ NOT NULL,
vehicle_id UUID NOT NULL,
vehicle_type TEXT NOT NULL, -- 'truck', 'car', 'van'
location GEOGRAPHY(POINT) NOT NULL,
speed DOUBLE PRECISION,
heading DOUBLE PRECISION,
altitude DOUBLE PRECISION,
metadata JSONB
);
SELECT create_hypertable('vehicle_tracking', 'time',
chunk_time_interval => INTERVAL '1 hour');
-- Fleet utilization analysis. The per-segment distance must be
-- computed in a subquery: window functions such as LAG() cannot
-- appear inside an aggregate call like SUM()
SELECT
vehicle_id,
MIN(time) AS first_tracked,
MAX(time) AS last_tracked,
COUNT(*) AS reading_count,
AVG(speed) AS avg_speed,
MAX(speed) AS max_speed,
SUM(segment_m) AS total_distance_m
FROM (
SELECT
time,
vehicle_id,
speed,
ST_Distance(
location,
LAG(location) OVER (PARTITION BY vehicle_id ORDER BY time)
) AS segment_m
FROM vehicle_tracking
WHERE time > NOW() - INTERVAL '24 hours'
) segments
GROUP BY vehicle_id
ORDER BY total_distance_m DESC;
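PostGIS computes geography distances on an ellipsoid; the great-circle (haversine) formula is a close spherical approximation that makes the LAG-based distance sum easy to check client-side. A Python sketch (`haversine_m` and `total_distance_m` are illustrative helpers):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (spherical approximation of
    PostGIS ST_Distance on geography values)."""
    r = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def total_distance_m(track):
    """Sum consecutive point-to-point distances over one vehicle's
    time-ordered (lat, lon) readings, like the LAG()-based SQL."""
    return sum(haversine_m(*a, *b) for a, b in zip(track, track[1:]))
```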
-- Geofencing alerts
WITH vehicle_positions AS (
SELECT
time,
vehicle_id,
location,
ST_MakePoint(-122.4194, 37.7749)::geography AS base_location
FROM vehicle_tracking
WHERE time > NOW() - INTERVAL '1 hour'
)
SELECT
time,
vehicle_id,
ST_Distance(location, base_location) AS distance_m,
CASE
WHEN ST_DWithin(location, base_location, 1000) THEN 'approaching'
WHEN ST_DWithin(location, base_location, 5000) THEN 'nearby'
ELSE 'far'
END AS proximity_status
FROM vehicle_positions
ORDER BY time;
Best Practices Summary
From these use cases, several best practices emerge:
- Choose appropriate chunk intervals based on data volume and query patterns
- Use segmentby columns for compression and chunk exclusion
- Leverage continuous aggregates for pre-computed rollups
- Implement retention policies to manage data lifecycle
- Use JSONB for flexible metadata without schema changes
- Consider geo types for location-aware applications
- Plan for scale by monitoring chunk counts and sizes
Conclusion
TimescaleDB powers production applications across virtually every industry that generates time-series data. From monitoring millions of IoT sensors to tracking financial markets, from observability platforms to healthcare wearables, TimescaleDB’s combination of PostgreSQL compatibility and time-series optimization enables diverse applications.
The use cases demonstrated here show common patterns: high-volume data ingestion, time-bucketed aggregations, continuous aggregates for pre-computed rollups, and sophisticated queries that combine time-series with relational or geospatial data. These patterns form the foundation for building robust time-series applications.
With this article, we’ve completed the TimescaleDB tutorial series covering basics, operations, internals, trends, AI applications, and production use cases. You now have comprehensive knowledge to design, implement, and operate TimescaleDB in your own applications.
Resources
- TimescaleDB Use Cases
- TimescaleDB Case Studies
- IoT with TimescaleDB
- Financial Time-Series
- DevOps Monitoring