
Database Integration in Rust Web Services

SQLx, Diesel, and ORM Patterns for Type-Safe Database Access

Introduction

Database access is fundamental to web services, yet it’s often a source of bugs, security vulnerabilities, and performance issues. Common problems include:

  • SQL Injection - User input directly concatenated into queries
  • Runtime Type Mismatches - Columns expected as integers but returned as strings
  • Connection Pool Exhaustion - Threads waiting for database connections
  • N+1 Query Problems - Fetching data inefficiently in loops
  • Migration Chaos - Schema changes out of sync with code
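The first of these is worth seeing concretely. A minimal sketch, with no database involved (`build_query_unsafe` is a hypothetical helper for illustration, not part of any library), of why concatenating input is dangerous:

```rust
// BAD: user input spliced directly into the SQL text
fn build_query_unsafe(user_input: &str) -> String {
    format!("SELECT * FROM users WHERE name = '{}'", user_input)
}

fn main() {
    // A classic injection payload turns the WHERE clause into a tautology
    let query = build_query_unsafe("x' OR '1'='1");
    assert_eq!(query, "SELECT * FROM users WHERE name = 'x' OR '1'='1'");
    println!("{query}");
    // Parameterized queries (.bind() in SQLx, the DSL in Diesel) send the
    // input out-of-band, so it is never parsed as SQL.
}
```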

Rust’s type system makes database access fundamentally safer. Unlike dynamically-typed languages where type errors surface at runtime, Rust catches database issues at compile time.

This article explores how to build robust, type-safe database layers using SQLx and Diesel, two different philosophies for Rust database access.


Part 1: Core Concepts

Database Access Strategies in Rust

Compile-Time Safety: Rust can verify at build time that:

  • Your queries are syntactically valid SQL
  • Column names and types match your code
  • Null handling is explicit

Runtime Safety: Dynamic verification when:

  • Connecting to databases
  • Parsing query results
  • Handling concurrent access

SQLx: Query-First, Compile-Time Verification

SQLx can verify SQL queries against a real database at compile time via its query! and query_as! macros; the function-style query_as::<_, T>() API used elsewhere in this article is checked only at runtime.

// This query is checked against PostgreSQL during compilation
// (requires DATABASE_URL to point at a development database)
let user = sqlx::query_as!(
    User,
    "SELECT id, email, name FROM users WHERE id = $1",
    user_id
)
.fetch_one(&pool)
.await?;

If the table doesn’t exist or columns are wrong, the code won’t compile.

Diesel: ORM, Type-Safe Query Builder

Diesel provides a type-safe query builder. Queries are built with Rust types, not SQL strings.

// Type-safe, no SQL strings needed
let user = users::table
    .find(user_id)
    .first::<User>(&mut conn)?;

If a column is removed, the code won’t compile.


Part 2: SQLx Setup and Usage

Installation and Configuration

[dependencies]
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "postgres", "uuid", "chrono"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1", features = ["v4", "serde"] }

SQLx reads the connection string from the DATABASE_URL environment variable; sqlx-cli and the compile-time macros also pick it up from a .env file:

# .env
DATABASE_URL=postgres://user:password@localhost/dbname

Connection Pool Management

use sqlx::postgres::PgPoolOptions;

#[derive(Clone)]
pub struct AppState {
    pub db: sqlx::PgPool,
}

async fn setup_database() -> Result<sqlx::PgPool, sqlx::Error> {
    // Create connection pool
    let pool = PgPoolOptions::new()
        .max_connections(5)           // Maximum concurrent connections
        .min_connections(2)           // Minimum to keep alive
        .acquire_timeout(std::time::Duration::from_secs(30))  // renamed from connect_timeout in sqlx 0.7
        .connect("postgres://user:pass@localhost/mydb")
        .await?;

    // Run migrations
    sqlx::migrate!("./migrations")
        .run(&pool)
        .await?;

    Ok(pool)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = setup_database().await?;

    let app = create_app(AppState { db }); // router construction elided
    
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;

    Ok(())
}

Defining Models

use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use sqlx::FromRow;
use uuid::Uuid;

// The FromRow derive enables automatic deserialization from database rows
#[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}

#[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
pub struct Post {
    pub id: Uuid,
    pub user_id: Uuid,
    pub title: String,
    pub content: String,
    pub created_at: DateTime<Utc>,
}

Queries with SQLx

use chrono::Utc;
use sqlx::PgPool;
use uuid::Uuid;

pub struct UserRepository;

impl UserRepository {
    // Single record
    pub async fn find_by_id(pool: &PgPool, id: Uuid) -> Result<User, sqlx::Error> {
        sqlx::query_as::<_, User>(
            "SELECT id, email, name, created_at, updated_at FROM users WHERE id = $1"
        )
            .bind(id)
            .fetch_one(pool)
            .await
    }

    // Multiple records
    pub async fn find_all(pool: &PgPool) -> Result<Vec<User>, sqlx::Error> {
        sqlx::query_as::<_, User>(
            "SELECT id, email, name, created_at, updated_at FROM users ORDER BY created_at DESC"
        )
            .fetch_all(pool)
            .await
    }

    // With filtering
    pub async fn search(pool: &PgPool, query: &str) -> Result<Vec<User>, sqlx::Error> {
        let search_term = format!("%{}%", query);
        sqlx::query_as::<_, User>(
            "SELECT id, email, name, created_at, updated_at FROM users 
             WHERE email ILIKE $1 OR name ILIKE $1 
             ORDER BY created_at DESC"
        )
            .bind(search_term)
            .fetch_all(pool)
            .await
    }

    // Insert
    pub async fn create(
        pool: &PgPool,
        email: &str,
        name: &str,
    ) -> Result<User, sqlx::Error> {
        let id = Uuid::new_v4();
        let now = Utc::now();

        sqlx::query_as::<_, User>(
            "INSERT INTO users (id, email, name, created_at, updated_at) 
             VALUES ($1, $2, $3, $4, $5)
             RETURNING id, email, name, created_at, updated_at"
        )
            .bind(id)
            .bind(email)
            .bind(name)
            .bind(now)
            .bind(now)
            .fetch_one(pool)
            .await
    }

    // Update
    pub async fn update(
        pool: &PgPool,
        id: Uuid,
        name: &str,
    ) -> Result<User, sqlx::Error> {
        let now = Utc::now();

        sqlx::query_as::<_, User>(
            "UPDATE users SET name = $1, updated_at = $2 WHERE id = $3
             RETURNING id, email, name, created_at, updated_at"
        )
            .bind(name)
            .bind(now)
            .bind(id)
            .fetch_one(pool)
            .await
    }

    // Delete
    pub async fn delete(pool: &PgPool, id: Uuid) -> Result<(), sqlx::Error> {
        sqlx::query("DELETE FROM users WHERE id = $1")
            .bind(id)
            .execute(pool)
            .await?;

        Ok(())
    }
}

Transactions

pub async fn transfer_funds(
    pool: &PgPool,
    from_user_id: Uuid,
    to_user_id: Uuid,
    amount: i64,
) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;

    // Deduct from sender
    sqlx::query("UPDATE users SET balance = balance - $1 WHERE id = $2")
        .bind(amount)
        .bind(from_user_id)
        .execute(&mut *tx)
        .await?;

    // Add to receiver
    sqlx::query("UPDATE users SET balance = balance + $1 WHERE id = $2")
        .bind(amount)
        .bind(to_user_id)
        .execute(&mut *tx)
        .await?;

    // Log transaction
    sqlx::query("INSERT INTO transactions (from_id, to_id, amount, created_at) VALUES ($1, $2, $3, $4)")
        .bind(from_user_id)
        .bind(to_user_id)
        .bind(amount)
        .bind(Utc::now())
        .execute(&mut *tx)
        .await?;

    // Both succeed or both fail
    tx.commit().await?;

    Ok(())
}

Part 3: Diesel Setup and Usage

Installation and Configuration

[dependencies]
diesel = { version = "2", features = ["postgres", "chrono", "uuid"] }
diesel-async = { version = "0.4", features = ["postgres", "deadpool"] }
tokio = { version = "1", features = ["full"] }
uuid = { version = "1", features = ["v4", "serde"] }

# diesel_cli is a standalone tool, not a crate dependency:
# cargo install diesel_cli --no-default-features --features postgres

Schema Definition

// src/schema.rs - Auto-generated by the diesel CLI
diesel::table! {
    users (id) {
        id -> Uuid,
        email -> Varchar,
        name -> Varchar,
        // Timestamptz matches the models' DateTime<Utc>; plain Timestamp maps to NaiveDateTime
        created_at -> Timestamptz,
        updated_at -> Timestamptz,
    }
}

diesel::table! {
    posts (id) {
        id -> Uuid,
        user_id -> Uuid,
        title -> Varchar,
        content -> Text,
        created_at -> Timestamptz,
    }
}

// Required for the posts.inner_join(users) query later on
diesel::joinable!(posts -> users (user_id));
diesel::allow_tables_to_appear_in_same_query!(users, posts);

Models and Insertable

use crate::schema::{posts, users};
use chrono::{DateTime, Utc};
use diesel::prelude::*;
use uuid::Uuid;

#[derive(Queryable, Selectable, Clone, Debug)]
#[diesel(table_name = users)]
pub struct User {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}

// For INSERT operations
#[derive(Insertable)]
#[diesel(table_name = users)]
pub struct NewUser {
    pub id: Uuid,
    pub email: String,
    pub name: String,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}

#[derive(Queryable, Selectable, Clone, Debug)]
#[diesel(table_name = posts)]
pub struct Post {
    pub id: Uuid,
    pub user_id: Uuid,
    pub title: String,
    pub content: String,
    pub created_at: DateTime<Utc>,
}

#[derive(Insertable)]
#[diesel(table_name = posts)]
pub struct NewPost {
    pub id: Uuid,
    pub user_id: Uuid,
    pub title: String,
    pub content: String,
    pub created_at: DateTime<Utc>,
}

Diesel Queries

use diesel::prelude::*;
use diesel_async::{AsyncPgConnection, RunQueryDsl};
use uuid::Uuid;

pub struct UserRepository;

impl UserRepository {
    // Find by ID
    pub async fn find_by_id(
        conn: &mut AsyncPgConnection,
        id: Uuid,
    ) -> Result<User, diesel::result::Error> {
        use crate::schema::users;

        users::table
            .find(id)
            .select(User::as_select())
            .first(conn)
            .await
    }

    // Find all
    pub async fn find_all(
        conn: &mut AsyncPgConnection,
    ) -> Result<Vec<User>, diesel::result::Error> {
        use crate::schema::users;

        users::table
            .order(users::created_at.desc())
            .select(User::as_select())
            .load(conn)
            .await
    }

    // Filter with WHERE clause
    pub async fn search(
        conn: &mut AsyncPgConnection,
        query: &str,
    ) -> Result<Vec<User>, diesel::result::Error> {
        use crate::schema::users;

        let search_term = format!("%{}%", query);

        users::table
            .filter(
                users::email.ilike(search_term.clone())
                    .or(users::name.ilike(search_term))
            )
            .order(users::created_at.desc())
            .select(User::as_select())
            .load(conn)
            .await
    }

    // Insert
    pub async fn create(
        conn: &mut AsyncPgConnection,
        email: &str,
        name: &str,
    ) -> Result<User, diesel::result::Error> {
        use crate::schema::users;
        use chrono::Utc;

        let new_user = NewUser {
            id: Uuid::new_v4(),
            email: email.to_string(),
            name: name.to_string(),
            created_at: Utc::now(),
            updated_at: Utc::now(),
        };

        diesel::insert_into(users::table)
            .values(new_user)
            .returning(User::as_returning())
            .get_result(conn)
            .await
    }

    // Update
    pub async fn update(
        conn: &mut AsyncPgConnection,
        id: Uuid,
        name: &str,
    ) -> Result<User, diesel::result::Error> {
        use crate::schema::users;
        use chrono::Utc;

        diesel::update(users::table.find(id))
            .set((
                users::name.eq(name),
                users::updated_at.eq(Utc::now()),
            ))
            .returning(User::as_returning())
            .get_result(conn)
            .await
    }

    // Delete
    pub async fn delete(
        conn: &mut AsyncPgConnection,
        id: Uuid,
    ) -> Result<(), diesel::result::Error> {
        use crate::schema::users;

        diesel::delete(users::table.find(id))
            .execute(conn)
            .await?;

        Ok(())
    }
}

Complex Queries with Joins

// Get user with all their posts
pub async fn find_user_with_posts(
    conn: &mut AsyncPgConnection,
    user_id: Uuid,
) -> Result<(User, Vec<Post>), diesel::result::Error> {
    use crate::schema::{users, posts};

    let user = users::table
        .find(user_id)
        .select(User::as_select())
        .first(conn)
        .await?;

    let user_posts = posts::table
        .filter(posts::user_id.eq(user_id))
        .order(posts::created_at.desc())
        .select(Post::as_select())
        .load(conn)
        .await?;

    Ok((user, user_posts))
}

// Diesel query with explicit JOIN
pub async fn find_posts_with_authors(
    conn: &mut AsyncPgConnection,
) -> Result<Vec<(Post, User)>, diesel::result::Error> {
    use crate::schema::{posts, users};

    posts::table
        .inner_join(users::table)
        .select((Post::as_select(), User::as_select()))
        .order(posts::created_at.desc())
        .load(conn)
        .await
}

Part 4: Connection Pooling

Both SQLx and Diesel benefit from connection pooling to avoid exhausting database resources:

// SQLx with connection pool
use sqlx::postgres::PgPoolOptions;

let pool = PgPoolOptions::new()
    .max_connections(10)
    .min_connections(2)
    .acquire_timeout(std::time::Duration::from_secs(30))
    .idle_timeout(std::time::Duration::from_secs(600))
    .connect(&database_url)
    .await?;

// Diesel with deadpool (via diesel-async's pooled_connection module)
use diesel_async::pooled_connection::deadpool::Pool;
use diesel_async::pooled_connection::AsyncDieselConnectionManager;
use diesel_async::AsyncPgConnection;

let manager = AsyncDieselConnectionManager::<AsyncPgConnection>::new(&database_url);
let pool: Pool<AsyncPgConnection> = Pool::builder(manager)
    .max_size(10)
    .build()?;

Why pooling matters:

  • Avoids creating new connections per request (expensive)
  • Limits resource usage on database server
  • Enables graceful degradation under load

Part 5: Migrations

Keep database schema in sync with code:

SQLx Migrations

# Create new migration (-r generates paired up/down files)
sqlx migrate add -r create_users_table

# File: migrations/20250101000001_create_users_table.up.sql
CREATE TABLE users (
    id UUID PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMPTZ NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL
);

CREATE INDEX idx_users_email ON users(email);

# File: migrations/20250101000001_create_users_table.down.sql
DROP TABLE users;

Diesel Migrations

# Generate migration files
diesel migration generate create_users_table

# Edit: migrations/2024-01-01-000000_create_users_table/up.sql
CREATE TABLE users (
    id UUID PRIMARY KEY,
    email VARCHAR NOT NULL UNIQUE,
    name VARCHAR NOT NULL,
    created_at TIMESTAMPTZ NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL
);

# migrations/2024-01-01-000000_create_users_table/down.sql
DROP TABLE users;

# Run migrations
diesel migration run

Part 6: Deployment Architecture

┌──────────────────────────────────────┐
│   Client (Browser/Mobile)            │
└──────────────────────────────────────┘
                  │
                  │ HTTP Request
                  ▼
┌──────────────────────────────────────┐
│   Rust Web Service (Axum)            │
│                                      │
│  ┌────────────────────────────────┐  │
│  │  Request Handler               │  │
│  │  - Parse input                 │  │
│  │  - Validate parameters         │  │
│  └────────────────────────────────┘  │
│                 │                    │
│                 ▼                    │
│  ┌────────────────────────────────┐  │
│  │  Repository Layer (SQLx/Diesel)│  │
│  │  - Type-safe queries           │  │
│  │  - Handle NULL values          │  │
│  │  - Transaction management      │  │
│  └────────────────────────────────┘  │
│                 │                    │
│                 ▼                    │
│  ┌────────────────────────────────┐  │
│  │  Connection Pool               │  │
│  │  - Max 10 connections          │  │
│  │  - Min 2 connections           │  │
│  │  - 30s acquire timeout         │  │
│  └────────────────────────────────┘  │
└──────────────────────────────────────┘
                  │
                  │ SQL Queries
                  ▼
┌──────────────────────────────────────┐
│   PostgreSQL Database                │
│  - Tables (users, posts, etc.)       │
│  - Indexes for performance           │
│  - Transactions for consistency      │
└──────────────────────────────────────┘

Part 7: Common Pitfalls & Best Practices

โŒ Pitfall: N+1 Queries

// BAD: N+1 queries (1 for users, N for each user's posts)
let users = find_all_users(&pool).await?;
for user in users {
    let posts = find_posts_by_user(&pool, user.id).await?;
    // Process posts...
}

// GOOD: Single query with JOIN or batch loading
let users_with_posts = find_users_with_posts_batch(&pool).await?;

// Or use async_graphql's DataLoader for automatic batching

Why it matters: N+1 queries scale poorly. With 1,000 users, you make 1,001 database queries instead of 1.
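The in-memory half of the batched approach can be sketched with stdlib types (simplified: in the real code the rows would be Post structs fetched in one round-trip with WHERE user_id = ANY($1), and the keys would be Uuids):

```rust
use std::collections::HashMap;

// After one batched query returns (user_id, title) rows for ALL users,
// attach each row to its owner in memory instead of querying per user.
fn group_by_user(rows: Vec<(u64, String)>) -> HashMap<u64, Vec<String>> {
    let mut by_user: HashMap<u64, Vec<String>> = HashMap::new();
    for (user_id, title) in rows {
        by_user.entry(user_id).or_default().push(title);
    }
    by_user
}

fn main() {
    let rows = vec![
        (1, "first post".to_string()),
        (2, "hello".to_string()),
        (1, "second post".to_string()),
    ];
    let grouped = group_by_user(rows);
    assert_eq!(grouped[&1], vec!["first post", "second post"]);
    assert_eq!(grouped[&2], vec!["hello"]);
}
```

Two queries total (one for users, one for all their posts) replace the 1 + N pattern, regardless of how many users there are.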

โŒ Pitfall: Not Using Transactions

// BAD: Multiple separate queries can partially fail
sqlx::query("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    .execute(&pool)
    .await?;

sqlx::query("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    .execute(&pool)
    .await?;
// If second fails, money disappears!

// GOOD: Transaction ensures both succeed or both fail
let mut tx = pool.begin().await?;

sqlx::query("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    .execute(&mut *tx)
    .await?;

sqlx::query("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    .execute(&mut *tx)
    .await?;

tx.commit().await?;

โŒ Pitfall: Forgetting NULL Handling

// BAD: Assumes column is never NULL
#[derive(FromRow)]
struct User {
    id: Uuid,
    email: String,
    phone: String,  // What if NULL?
}

// GOOD: Explicit NULL handling with Option
#[derive(FromRow)]
struct User {
    id: Uuid,
    email: String,
    phone: Option<String>,  // Nullable field
}

// Handle when querying
if let Some(phone) = user.phone {
    println!("Phone: {}", phone);
}

✅ Best Practice: Repository Pattern

// Encapsulate all database access behind a trait
// (async_trait keeps the trait object-safe and mockall-compatible)
#[async_trait::async_trait]
pub trait UserRepository {
    async fn find_by_id(&self, id: Uuid) -> Result<User>;
    async fn create(&self, email: &str, name: &str) -> Result<User>;
    async fn update(&self, id: Uuid, name: &str) -> Result<User>;
    async fn delete(&self, id: Uuid) -> Result<()>;
}

// Easy to mock for testing
#[cfg(test)]
mod tests {
    use mockall::predicate::*;

    #[tokio::test]
    async fn test_get_user() {
        let mut mock = MockUserRepository::new();
        mock.expect_find_by_id()
            .with(eq(user_id))
            .returning(|_| Ok(test_user()));

        // Test with mock
    }
}

✅ Best Practice: Use Migrations for Schema Changes

# Never modify database manually
# Always version control schema changes
sqlx migrate add add_phone_to_users
# Edit migration file...
sqlx migrate run

✅ Best Practice: Index Important Columns

-- Frequently queried columns need indexes
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_posts_created_at ON posts(created_at DESC);

-- Check index usage
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'user@example.com';

Part 8: SQLx vs Diesel Comparison

| Aspect             | SQLx                                       | Diesel                      |
|--------------------|--------------------------------------------|-----------------------------|
| Approach           | SQL strings with compile-time verification | Query builder API           |
| Compile-Time Check | Requires a database (or query data cached by cargo sqlx prepare) | No database needed |
| Learning Curve     | Easier (you already know SQL)              | Steeper (learn DSL)         |
| Query Complexity   | Can write any SQL                          | Limited to DSL              |
| Type Safety        | High (verified at compile time)            | Very high (Rust types only) |
| Raw SQL            | Supported                                  | Via sql_query()             |
| Migrations         | SQLx CLI                                   | Diesel CLI                  |
| Ecosystem          | Growing                                    | Mature                      |
| Async Support      | Native                                     | Via diesel-async            |

When to Use SQLx

  • Complex queries (window functions, CTEs, etc.)
  • Developers comfortable with SQL
  • Faster prototyping
  • Need raw SQL fallback

When to Use Diesel

  • Want maximum type safety
  • Complex business logic
  • Prefer declarative query building
  • Long-term maintainability

Part 9: Real-World Example: Complete Repository

// src/repositories/user_repository.rs
use sqlx::PgPool;
use uuid::Uuid;
use chrono::Utc;
use crate::models::User;

pub struct UserRepository {
    pool: PgPool,
}

impl UserRepository {
    pub fn new(pool: PgPool) -> Self {
        Self { pool }
    }

    pub async fn find_by_id(&self, id: Uuid) -> Result<Option<User>, sqlx::Error> {
        sqlx::query_as::<_, User>(
            "SELECT id, email, name, created_at, updated_at FROM users WHERE id = $1"
        )
            .bind(id)
            .fetch_optional(&self.pool)
            .await
    }

    pub async fn find_by_email(&self, email: &str) -> Result<Option<User>, sqlx::Error> {
        sqlx::query_as::<_, User>(
            "SELECT id, email, name, created_at, updated_at FROM users WHERE email = $1"
        )
            .bind(email)
            .fetch_optional(&self.pool)
            .await
    }

    pub async fn create(&self, email: &str, name: &str) -> Result<User, sqlx::Error> {
        let id = Uuid::new_v4();
        let now = Utc::now();

        sqlx::query_as::<_, User>(
            "INSERT INTO users (id, email, name, created_at, updated_at) 
             VALUES ($1, $2, $3, $4, $5)
             RETURNING id, email, name, created_at, updated_at"
        )
            .bind(id)
            .bind(email)
            .bind(name)
            .bind(now)
            .bind(now)
            .fetch_one(&self.pool)
            .await
    }

    pub async fn list_paginated(
        &self,
        page: i64,
        page_size: i64,
    ) -> Result<Vec<User>, sqlx::Error> {
        let offset = (page - 1) * page_size;

        sqlx::query_as::<_, User>(
            "SELECT id, email, name, created_at, updated_at FROM users 
             ORDER BY created_at DESC
             LIMIT $1 OFFSET $2"
        )
            .bind(page_size)
            .bind(offset)
            .fetch_all(&self.pool)
            .await
    }

    pub async fn count(&self) -> Result<i64, sqlx::Error> {
        let result: (i64,) = sqlx::query_as(
            "SELECT COUNT(*) FROM users"
        )
            .fetch_one(&self.pool)
            .await?;

        Ok(result.0)
    }
}

// Usage in handler
async fn list_users(
    axum::extract::State(state): axum::extract::State<AppState>,
    axum::extract::Query(params): axum::extract::Query<PaginationParams>,
) -> axum::Json<Vec<User>> {
    let users = state.user_repository
        .list_paginated(params.page, params.page_size)
        .await
        .unwrap_or_default();

    axum::Json(users)
}

Part 10: Resources & Further Reading

Books

  • Rust Web Development by Bastian Gruber (Ch. 5-6)
  • SQL Performance Explained by Markus Winand
  • PostgreSQL Up and Running by Regina O. Obe & Leo Hsu

Tools & Libraries

  • SQLx - Async SQL toolkit with compile-time query verification
  • Diesel - Type-safe ORM
  • SeaORM - Async ORM alternative
  • refinery - Alternative migration tool
  • sqlx-cli - Command-line tools
  • pgAdmin - PostgreSQL management UI

Part 11: Alternatives & Complementary Technologies

| Tool           | Purpose                   | Rust Integration                       |
|----------------|---------------------------|----------------------------------------|
| sqlc           | Generate Go code from SQL | External tool (no native Rust binding) |
| SeaORM         | Async ORM                 | Native Rust                            |
| tokio-postgres | Low-level client          | Native Rust                            |
| sqlx-cli       | Migration management      | Native Rust                            |
| Redis          | Caching layer             | Native Rust via redis crate            |
| Elasticsearch  | Search & analytics        | Native Rust via elasticsearch crate    |

When to Add Caching

// Redis caching layer reduces database load (anyhow::Result assumed for brevity)
use redis::AsyncCommands;

pub async fn get_user_cached(
    user_id: Uuid,
    db: &PgPool,
    redis: &redis::Client,
) -> anyhow::Result<User> {
    // Commands run on an async connection, not on the Client directly
    let mut cache = redis.get_multiplexed_async_connection().await?;
    let key = format!("user:{}", user_id);

    // Try cache first
    if let Ok(cached) = cache.get::<_, String>(&key).await {
        return Ok(serde_json::from_str(&cached)?);
    }

    // Cache miss: query database
    let user = find_user(db, user_id).await?;

    // Store in cache for 1 hour
    cache.set_ex::<_, _, ()>(&key, serde_json::to_string(&user)?, 3600).await?;

    Ok(user)
}

Part 12: Testing Database Code

#[cfg(test)]
mod tests {
    use sqlx::postgres::PgPoolOptions;
    use testcontainers::clients;
    use testcontainers::images;

    #[tokio::test]
    async fn test_create_user() {
        // Spin up test database in Docker
        let docker = clients::Cli::default();
        let postgres = docker.run(images::postgres::Postgres::default());

        let pool = PgPoolOptions::new()
            .connect(&format!(
                "postgres://postgres:postgres@localhost:{}/postgres",
                postgres.get_host_port_ipv4(5432),
            ))
            .await
            .unwrap();

        // Run migrations
        sqlx::migrate!("./migrations")
            .run(&pool)
            .await
            .unwrap();

        // Test code
        let repo = UserRepository::new(pool);
        let user = repo.create("test@example.com", "Test User").await.unwrap();

        assert_eq!(user.email, "test@example.com");
        assert_eq!(user.name, "Test User");
    }
}

Conclusion

Database integration in Rust combines type safety, performance, and reliability. Whether you choose SQLx for SQL-native development or Diesel for maximum type safety, Rust prevents entire classes of database-related bugs:

  1. SQL Injection - Parameterized queries prevent injection
  2. Type Mismatches - Compile-time type verification
  3. NULL Bugs - Explicit Option types
  4. Connection Leaks - Automatic pool management
  5. Consistency Issues - Transactions with ACID guarantees

By adopting repository patterns, connection pooling, migrations, and proper indexing, you build database layers that scale with your application.

