Converting a replica set member into a standalone server is a destructive operation from a high-availability perspective. You lose automatic failover, oplog-based replication, and consistent read-after-write guarantees. This guide walks through every step — architecture fundamentals, pre-conversion checks, backup strategies, both secondary and primary conversion paths, application impact, and rollback planning — so you can execute the migration safely in production.
Replica Set Architecture Fundamentals
A MongoDB replica set is a group of mongod instances that maintain the same data set through asynchronous replication.
Member Roles
- Primary: The single member that accepts all write operations. Writes are recorded in the oplog (capped collection). Only one primary exists at any moment.
- Secondary: Replicates the primary’s oplog and applies operations asynchronously. Can serve reads when read preference allows. Secondaries can become primaries through election.
- Arbiter: A lightweight member that participates in elections but does not hold data. Arbiters are useful when you need an odd number of voting members without the storage cost of a full data-bearing node.
Election Process
When the primary becomes unreachable, a secondary initiates an election:
- The secondary detects it cannot reach the primary (default heartbeat interval is 2 seconds, 10-second timeout before election).
- It requests votes from all eligible members.
- Members vote based on priority, oplog freshness, and connectivity.
- A candidate that receives votes from a majority of voting members becomes the new primary.
Elections are also triggered by rs.stepDown(), rs.reconfig(), or a higher-priority member catching up and becoming eligible to take over from the current primary.
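The majority requirement behind these elections is easy to compute; a small sketch (the `majority` helper is ours, not a MongoDB tool):

```shell
# Hypothetical helper: a candidate needs votes from a majority of
# voting members, i.e. floor(N/2) + 1. This is also why odd voter
# counts are preferred: going from 3 to 4 voters raises the majority
# from 2 to 3 without adding any failure tolerance.
majority() {
  local voters=$1
  echo $(( voters / 2 + 1 ))
}

majority 3   # 2 — a 3-member set tolerates one member down
majority 4   # 3 — a 4th member adds no extra tolerance
majority 5   # 3 — a 5-member set tolerates two members down
```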
Oplog
The oplog (local.oplog.rs) is a capped collection on every replica set member. It stores an ordered log of all write operations. Secondaries read the oplog from the primary (or another secondary) and replay operations. Oplog size is configurable — with the WiredTiger storage engine the default is 5% of free disk space, clamped to a minimum of 990 MB and a maximum of 50 GB. If a secondary falls so far behind that the entries it still needs have been overwritten (the oplog window is exceeded), it becomes stale, enters RECOVERING state, and must be fully resynced.
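As a back-of-the-envelope capacity check, you can estimate how many hours of history an oplog holds from its size and your write volume; a sketch with illustrative numbers (the helper name and the rates are assumptions, not measurements):

```shell
# Hypothetical estimate: hours of replication history an oplog of a
# given size retains, assuming a steady rate of oplog bytes per hour.
oplog_window_hours() {
  local oplog_bytes=$1 bytes_per_hour=$2
  echo $(( oplog_bytes / bytes_per_hour ))
}

# A 1 GiB oplog receiving ~128 MiB of entries per hour keeps ~8 hours:
oplog_window_hours $(( 1024 * 1024 * 1024 )) $(( 128 * 1024 * 1024 ))  # 8
```

A secondary that is offline longer than this window must be fully resynced.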
Priority and Votes
Each member has a priority (0-1000, default 1) and one vote. Priority 0 members cannot become primary. Hidden members (priority 0, hidden: true) do not appear in db.isMaster() (or its modern equivalent, db.hello()) output and are invisible to application drivers.
Understanding these fundamentals is critical before making topology changes. A conversion mistake can cause data loss, application downtime, or permanent cluster degradation.
Pre-Conversion Health Checks
Before removing a member, verify the replica set is healthy and the target member is fully caught up.
Check Replica Set Status
Connect to any member and run:
rs.status()
Look for:
- members[].stateStr — should all be PRIMARY, SECONDARY, or ARBITER.
- members[].health — should be 1 (up) for all members.
- members[].lastHeartbeat — timestamps should be recent (within seconds).
Check Current Configuration
rs.conf()
Pay attention to:
- members[].priority — know which member would become primary if the current primary fails.
- members[].votes — confirm an odd number of voters to avoid election stalemates.
- settings.chainingAllowed — if disabled, secondaries replicate only from the primary.
Validate Optime Lag
Optime (last applied operation timestamp) lag between primary and secondary must be minimal before converting the secondary:
// On primary — get the primary's last optime
rs.status().members.filter(m => m.stateStr === 'PRIMARY')[0].optime
// On the target secondary — get this member's optime
rs.status().members.filter(m => m.name === 'target-host:27017')[0].optime
If the timestamps differ by more than a few seconds, wait or investigate the replication lag:
// Check replication lag in seconds for each secondary
rs.printSecondaryReplicationInfo()
Output shows syncedTo timestamps. Any secondary showing significant lag needs investigation before removal — writes queued in the oplog that are not yet applied will be lost when the member leaves the replica set.
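The go/no-go decision can be expressed as a simple gate on the lag; a sketch (`lag_ok` is a name we made up, and the optimes are passed in as plain Unix seconds):

```shell
# Hypothetical gate: pass only when the secondary's optime is within
# max_lag seconds of the primary's.
lag_ok() {
  local primary_ts=$1 secondary_ts=$2 max_lag=$3
  local lag=$(( primary_ts - secondary_ts ))
  echo "lag: ${lag}s"
  [ "$lag" -le "$max_lag" ]
}

lag_ok 1714000010 1714000008 5 && echo "safe to remove"
```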
Run replSetGetStatus
db.adminCommand({ replSetGetStatus: 1 })
This returns structured JSON with detailed member states, optimes, heartbeat data, and write concern information. Use it in monitoring scripts for pre-migration validation.
Verify Oplog Window
// Check the oplog's configured maximum size (in bytes)
db.getSiblingDB('local').oplog.rs.stats().maxSize
// Find the earliest and latest timestamps
db.getSiblingDB('local').oplog.rs.find().sort({ts: 1}).limit(1).next().ts
db.getSiblingDB('local').oplog.rs.find().sort({ts: -1}).limit(1).next().ts
If the oplog window is narrow (e.g., less than 1 hour on a busy system), schedule conversion during low traffic to minimize risk.
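The leading component of those `ts` values is Unix seconds, so the window is just their difference; a sketch of the threshold check (the function name and the minimum-hours gate are ours):

```shell
# Hypothetical check: compute the oplog window from the earliest and
# latest entry timestamps (Unix-seconds part) and require a minimum.
oplog_window_ok() {
  local first_ts=$1 last_ts=$2 min_hours=$3
  local window_sec=$(( last_ts - first_ts ))
  echo "window: $(( window_sec / 3600 ))h"
  [ "$window_sec" -ge $(( min_hours * 3600 )) ]
}

oplog_window_ok 1714000000 1714036000 2 && echo "PASS" || echo "TOO NARROW"
```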
Final Pre-Conversion Checklist
| Check | Command | Pass Condition |
|---|---|---|
| All members healthy | rs.status() | health=1 for all |
| Target member SECONDARY | rs.status() | stateStr=SECONDARY |
| Optime lag minimal | rs.printSecondaryReplicationInfo() | < 5 seconds lag |
| Oplog window adequate | oplog.ts query | > 2 hours window |
| Backup completed | verify backup files | backup exists and testable |
| Application informed | team communication | downtime approved |
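The whole checklist can be wrapped in a small harness that refuses to proceed if any check fails; a sketch in which the `check_*` functions are stubs you would replace with real `mongosh` queries:

```shell
# Hypothetical pre-conversion harness. Each check_* function returns 0
# on pass; the stubs below stand in for real mongosh-based checks.
set -u

check_members_healthy()  { true; }  # stub: rs.status() health == 1 for all
check_target_secondary() { true; }  # stub: target stateStr == SECONDARY
check_backup_exists()    { [ -n "${BACKUP_FILE:-}" ] && [ -f "${BACKUP_FILE}" ]; }

run_preflight() {
  local failed=0 check
  for check in "$@"; do
    if "$check"; then
      echo "PASS ${check}"
    else
      echo "FAIL ${check}"
      failed=1
    fi
  done
  return "$failed"
}

BACKUP_FILE=$(mktemp)   # stand-in for a real backup archive
run_preflight check_members_healthy check_target_secondary check_backup_exists
rm -f "${BACKUP_FILE}"
```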
Backup Strategies
Never convert a replica set member without a verified backup. Choose a strategy based on your data size, recovery time objective (RTO), and recovery point objective (RPO).
mongodump with Options
For databases under 100 GB, mongodump is straightforward:
# Full backup from secondary to avoid primary load
mongodump --host secondary-host:27017 \
--out /backup/mongodb/pre-conversion-$(date +%Y%m%d) \
--gzip \
--oplog \
--numParallelCollections 4
# Verify the backup by restoring to a test instance
mongorestore --dryRun \
--dir /backup/mongodb/pre-conversion-20260426 \
--gzip
Use --oplog to capture point-in-time consistent backups across collections. Without --oplog, different collections may reflect different write states.
Filesystem Snapshots
For databases over 100 GB, use LVM snapshots or cloud provider snapshots (EBS, persistent disk):
# LVM snapshot (requires journaling enabled)
# 1. Flush writes and lock the database
mongosh admin --eval "db.fsyncLock()"
# 2. Create LVM snapshot
lvcreate -L 10G -s -n mongo_snap /dev/vg0/mongodb
# 3. Unlock the database
mongosh admin --eval "db.fsyncUnlock()"
# 4. Mount and copy data
mkdir /mnt/mongo_snap
mount /dev/vg0/mongo_snap /mnt/mongo_snap
rsync -av /mnt/mongo_snap/ /backup/mongodb/snapshot-20260426/
umount /mnt/mongo_snap
lvremove /dev/vg0/mongo_snap
Automated Backup Wrapper Script
For automated periodic backups, use a wrapper script:
#!/bin/bash
# backup-mongo.sh — creates timestamped, compressed backup
set -euo pipefail
BACKUP_DIR="/backup/mongodb"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
HOST="${1:-localhost:27017}"
BACKUP_PATH="${BACKUP_DIR}/pre-conversion-${TIMESTAMP}"
log_info() { echo "[INFO] $(date +%H:%M:%S) $*"; }
log_info "Starting backup from ${HOST}"
mkdir -p "${BACKUP_PATH}"
# set -euo pipefail aborts before an 'if [ $? -eq 0 ]' check can run,
# so test the mongodump command directly
if mongodump --host "${HOST}" \
  --out "${BACKUP_PATH}" \
  --gzip \
  --oplog \
  --numParallelCollections 4; then
  tar -czf "${BACKUP_PATH}.tar.gz" -C "${BACKUP_DIR}" "pre-conversion-${TIMESTAMP}"
  rm -rf "${BACKUP_PATH}"
  log_info "Backup written to ${BACKUP_PATH}.tar.gz"
else
  log_info "Backup FAILED"
  exit 1
fi
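A retention step pairs naturally with this wrapper so the backup directory does not grow without bound; a hedged sketch (the `prune_backups` name and the 14-day policy are examples, not recommendations):

```shell
# Hypothetical retention helper: delete pre-conversion archives older
# than keep_days from the backup directory.
prune_backups() {
  local dir=$1 keep_days=$2
  find "$dir" -maxdepth 1 -name 'pre-conversion-*.tar.gz' \
    -mtime +"$keep_days" -print -delete
}

# Example policy: keep two weeks of archives
# prune_backups /backup/mongodb 14
```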
Backup Verification
A backup you cannot restore is worthless. Test restoration on a separate instance — extract the archive first, since mongorestore reads a dump directory, not a tarball:
tar -xzf /backup/mongodb/pre-conversion-20260426.tar.gz -C /tmp
mongorestore --drop --gzip --dir /tmp/pre-conversion-20260426
Run validation checks after restore:
// Compare document counts on a few key collections
use myapp
db.users.countDocuments()
db.orders.countDocuments()
Converting a Secondary (Recommended)
This is the safest path — the secondary holds the same data as the primary, and removing it does not affect availability.
Step 1: Verify the Secondary Is Caught Up
Connect to the primary and confirm the target secondary’s optime is current:
// On primary — include each member's optime so catch-up is visible
const members = rs.status().members;
members.forEach(m => {
print(`${m.name}: state=${m.stateStr}, health=${m.health}, optime=${m.optimeDate}`);
});
Step 2: Remove the Secondary from the Replica Set
// On primary
rs.remove('secondary-hostname:27017')
Confirm removal:
rs.status() // member should no longer appear
Step 3: Stop mongod on the Removed Host
# systemd-managed instance
sudo systemctl stop mongod
# Confirm stopped
sudo systemctl status mongod # should show inactive/dead
Step 4: Remove Replication Configuration
Edit /etc/mongod.conf:
# Before — replication configured as replica set member
storage:
dbPath: /var/lib/mongodb
net:
bindIp: 0.0.0.0
port: 27017
replication:
replSetName: rs0
security:
authorization: enabled
# After — replication section removed
storage:
dbPath: /var/lib/mongodb
net:
bindIp: 0.0.0.0
port: 27017
security:
authorization: enabled
Verify no --replSet flag in the systemd unit file:
sudo grep -r replSet /etc/systemd/system/mongod.service /etc/default/mongod
Step 5: Remove Local Replication Data
The local database contains replica set metadata (oplog, replset config, authentication cache). While mongod will start without it, cleaning it avoids stale metadata:
# Remove local database files — this pattern applies to directory-per-DB
# or legacy MMAPv1 layouts only
sudo rm -rf /var/lib/mongodb/local.*
# With the default WiredTiger layout there are no local.* files on disk;
# instead, start the standalone first and drop the database from the shell:
#   use local
#   db.dropDatabase()
Step 6: Restart as Standalone
sudo systemctl start mongod
# Check logs for clean startup
sudo journalctl -u mongod --since "5 minutes ago" | tail -20
Step 7: Verify Standalone Mode
Connect to the instance and confirm it is no longer part of a replica set:
db.isMaster()
Output should show {"ismaster": true, ...} without setName. Additional verification:
// Confirm no replication config
rs.conf()
// Should error: "no replset config has been received"
// Confirm oplog is gone or inactive
use local
show collections
// oplog.rs should not appear
Converting a Primary
Converting a primary requires extra care because removing the active primary triggers an election and temporarily halts writes.
Step 1: Step Down the Primary
Connect to the primary and force it to relinquish its role:
rs.stepDown(120)
The parameter (120 seconds) is the time window during which the stepped-down member will not seek primary re-election. Choose a value long enough to complete the removal.
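Rather than sleeping a fixed time and hoping the election has finished, you can poll until a condition holds; a generic sketch (`wait_for` is ours — in practice the predicate would wrap an `rs.status()` query checking for a PRIMARY):

```shell
# Hypothetical poll helper: run a predicate command up to max_tries
# times, sleeping interval seconds between attempts; succeed as soon
# as the predicate does.
wait_for() {
  local max_tries=$1 interval=$2; shift 2
  local i
  for (( i = 1; i <= max_tries; i++ )); do
    if "$@"; then
      return 0
    fi
    sleep "$interval"
  done
  return 1
}

# Illustrative usage (has_primary would grep mongosh rs.status() output):
# wait_for 30 2 has_primary
wait_for 3 0 true && echo "condition met"
```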
Step 2: Verify New Primary Is Elected
rs.status()
Confirm a new member has stateStr: "PRIMARY" and the old primary is now SECONDARY.
Step 3: Drain Connections and Remove
Once the old primary is secondary, remove it:
// On the new primary
rs.remove('old-primary-hostname:27017')
Step 4: Stop, Reconfigure, Restart
Follow steps 3-7 from the secondary conversion: stop mongod, edit mongod.conf to remove replication.replSetName, remove local database replication data, restart, and verify standalone mode.
Emergency: Lonely Primary with No Other Data Members
If your replica set had only one data-bearing member (plus possibly an arbiter), stepping down will make the set unavailable for writes (no eligible candidate). In this case:
- Schedule a maintenance window.
- Stop all application traffic.
- Stop mongod on the lone primary.
- Edit the config to remove replication.
- Restart as standalone.
This is equivalent to promoting the single member to standalone, but the replica set effectively ceases to exist. You must update all connection strings.
Handling Applications During Conversion
Application drivers that use the replica set connection string format (mongodb://host1,host2,host3/?replicaSet=rs0) will detect topology changes and may need reconfiguration.
Connection String Updates
After conversion, the standalone instance uses a simple connection string:
## Before (replica set)
mongodb://user:pass@host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0
## After (standalone)
mongodb://user:pass@standalone-host:27017/mydb
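If many services carry their own connection strings, the rewrite can be mechanized: keep the first host, drop the rest, and strip the `replicaSet` parameter. A sed-based sketch that only handles simple URI shapes like the examples above (not every legal MongoDB URI):

```shell
# Hypothetical rewrite for simple URIs: first host wins, replicaSet
# query parameter is removed. Not a general URI parser.
standalone_uri() {
  echo "$1" \
    | sed -E 's#(mongodb://[^/,]*),[^/]*#\1#' \
    | sed -E 's#[?&]replicaSet=[^&]*##'
}

standalone_uri 'mongodb://user:pass@host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0'
# mongodb://user:pass@host1:27017/mydb
```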
Driver-Specific Examples
Go (mongo-go-driver):
// Before — replica set connection
client, err := mongo.Connect(ctx, options.Client().ApplyURI(
"mongodb://user:pass@host1:27017,host2:27017/mydb?replicaSet=rs0",
))
// After — standalone connection
client, err := mongo.Connect(ctx, options.Client().ApplyURI(
"mongodb://user:pass@standalone-host:27017/mydb",
))
Node.js (mongodb driver):
// Before
const client = new MongoClient(
'mongodb://user:pass@host1:27017,host2:27017/mydb?replicaSet=rs0'
);
// After
const client = new MongoClient(
'mongodb://user:pass@standalone-host:27017/mydb'
);
Python (pymongo):
# Before
client = MongoClient(
'mongodb://user:pass@host1:27017,host2:27017/mydb?replicaSet=rs0'
)
# After
client = MongoClient(
'mongodb://user:pass@standalone-host:27017/mydb'
)
Driver Retry Logic and Read Preferences
Replica set drivers perform automatic failover. Standalone drivers have no failover — if the server goes down, the connection fails. Ensure your application handles connection errors with retry logic:
// Retry wrapper for standalone connections — exponential backoff
package retry

import (
	"context"
	"fmt"
	"math"
	"time"
)

func withRetry(ctx context.Context, fn func() error, maxRetries int) error {
	var err error
	for i := 0; i < maxRetries; i++ {
		if err = fn(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Duration(math.Pow(2, float64(i))) * time.Second):
		}
	}
	return fmt.Errorf("operation failed after %d retries: %w", maxRetries, err)
}
Blue-Green Conversion Pattern
To minimize application impact:
- Deploy the standalone instance alongside the replica set (new server, same data restored from backup).
- Point a subset of applications (canary) to the standalone.
- Validate all operations work correctly.
- Swap all traffic to standalone.
- Decommission the replica set members.
Post-Conversion Verification
Data Integrity Checks
Run validation on the standalone instance:
// Validate all collections for corruption
db.adminCommand({ listDatabases: 1 }).databases.forEach(dbInfo => {
  const siblingDB = db.getSiblingDB(dbInfo.name);
  siblingDB.getCollectionNames().forEach(collName => {
    const result = siblingDB.runCommand({ validate: collName, full: true });
    if (!result.valid) {
      print(`CORRUPTION: ${dbInfo.name}.${collName}: ${result.errors}`);
    } else {
      print(`OK: ${dbInfo.name}.${collName}`);
    }
  });
});
Document Count Verification
Compare document counts against the last known good backup:
#!/bin/bash
# verify-counts.sh — compare document counts between current and backup
set -euo pipefail
STANDALONE_HOST="${1}"
BACKUP_PATH="${2}"
echo "=== Verifying document counts ==="
# Get counts from standalone
mongosh --host "${STANDALONE_HOST}" --quiet --eval "
db.adminCommand({ listDatabases: 1 }).databases.forEach(d => {
  const sib = db.getSiblingDB(d.name);
  sib.getCollectionNames().forEach(c => {
    print(d.name + '.' + c + ': ' + sib.getCollection(c).countDocuments());
  });
});
" > /tmp/standalone-counts.txt
# Get backup info
# Compare — if counts differ significantly, flag for review
echo "Comparison complete — review /tmp/standalone-counts.txt against backup"
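The comparison left open above can be a plain diff of two "namespace: count" files; a sketch (the file format is assumed to match what the script writes):

```shell
# Hypothetical comparator: report any namespace whose count differs
# between the two files, or that exists on only one side.
compare_counts() {
  local current=$1 baseline=$2
  # sort both sides so diff lines up namespaces deterministically
  diff <(sort "$current") <(sort "$baseline") && echo "counts match"
}

# compare_counts /tmp/standalone-counts.txt /tmp/backup-counts.txt
```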
Application-Level Testing
Run a smoke test suite against the standalone:
#!/bin/bash
# smoke-test.sh — basic read/write verification
set -euo pipefail
HOST="${1:-localhost:27017}"
mongosh --host "${HOST}" --quiet <<'EOF'
const TEST_DB = 'smoke_test';
const TEST_COLL = 'verification';
// Write test
db.getSiblingDB(TEST_DB).getCollection(TEST_COLL).insertOne({
test: true,
timestamp: new Date(),
host: db.hostInfo().system.hostname
});
// Read test
const doc = db.getSiblingDB(TEST_DB).getCollection(TEST_COLL).findOne({ test: true });
print(`Write/Read: ${doc ? 'PASS' : 'FAIL'}`);
// Index test
const indexes = db.getSiblingDB(TEST_DB).getCollection(TEST_COLL).getIndexes();
print(`Indexes: ${indexes.length > 0 ? 'PASS' : 'FAIL'} (found ${indexes.length})`);
// Cleanup
db.getSiblingDB(TEST_DB).dropDatabase();
print('Smoke test complete');
EOF
Performance Baseline
Establish a performance baseline post-conversion to detect regressions:
// Basic write throughput test
const bench = db.getSiblingDB('perf_test').bench;
const docs = [];
for (let i = 0; i < 10000; i++) {
docs.push({ _id: i, value: Math.random(), ts: new Date() });
}
const start = Date.now();
bench.insertMany(docs, { ordered: false });
const elapsed = Date.now() - start;
print(`Inserted 10,000 documents in ${elapsed}ms (${(10000/elapsed*1000).toFixed(0)} ops/sec)`);
db.getSiblingDB('perf_test').dropDatabase();
Compare this baseline against pre-conversion metrics from your monitoring system (e.g., MongoDB Atlas, Prometheus + mongodb_exporter, or mongostat).
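That comparison can itself be gated with a tolerance; an integer-math sketch (the 20% drop threshold is an example, not a recommendation):

```shell
# Hypothetical regression gate: fail when current ops/sec falls more
# than max_drop_pct below the recorded baseline.
throughput_ok() {
  local baseline=$1 current=$2 max_drop_pct=$3
  [ $(( current * 100 )) -ge $(( baseline * (100 - max_drop_pct) )) ]
}

throughput_ok 10000 9000 20 && echo "within tolerance"
throughput_ok 10000 7000 20 || echo "regression detected"
```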
Restoring Back to a Replica Set
If the standalone conversion causes issues, roll back:
Step 1: Stop the Standalone
sudo systemctl stop mongod
Step 2: Add Replication Config Back
Edit /etc/mongod.conf:
replication:
replSetName: rs0
Step 3: Clear Local Database
Remove stale replication metadata while mongod is stopped (this file pattern applies only to directory-per-DB or legacy MMAPv1 layouts; with the default WiredTiger layout there are no local.* files, and the member will reconcile its local data when it rejoins the set):
sudo rm -rf /var/lib/mongodb/local.*
Step 4: Restart mongod
With replSetName restored in the config, the member starts in replica set mode again:
sudo systemctl start mongod
Step 5: Initialize or Re-add to the Replica Set
If this was the original set and other members still exist, connect to the current primary and re-add the member:
// On the existing primary
rs.add('standalone-hostname:27017')
If the other members are gone and you need to recreate the set:
rs.initiate({
_id: 'rs0',
members: [
{ _id: 0, host: 'standalone-hostname:27017' }
]
})
Alternative Approaches: Single-Node Replica Set vs Standalone
Running a single-member replica set preserves the replication protocol while operating on one server. It is not the same as a true standalone.
Comparison Table
| Feature | Standalone | Single-Node Replica Set | Multi-Node Replica Set |
|---|---|---|---|
| Oplog | No | Yes | Yes |
| Automatic failover | No | No | Yes |
| Read concern majority | No | Yes | Yes |
| Write concern majority | No | Yes (majority of one) | Yes |
| Change streams | No | Yes | Yes |
| Transactions | No (requires replica set) | Yes | Yes |
| Application driver URI | Simple | replicaSet= required | replicaSet= required |
| Resync from another node | No | No | Yes |
| Monitoring tools | Fewer metrics | Full replica set metrics | Full replica set metrics |
| Disk usage | Lower (no oplog) | Higher (oplog storage) | Higher |
| Backup method | mongodump/snapshot | mongodump/snapshot | mongodump/secondary backup |
| Best for | Dev, single-server prod | Local dev parity, testing | Production HA |
When to Choose Single-Node Replica Set
- You need change streams for event-driven applications.
- Your application uses transactions (requires replica set even with one member).
- You want replica set monitoring and tooling without standing up extra nodes.
When to Choose Standalone
- You want the simplest possible deployment.
- Resource constraints (no oplog overhead).
- Migration from replica set to a different topology (e.g., sharded cluster).
Docker and Kubernetes Environments
Docker Conversion
For MongoDB running in a Docker container with a bind-mounted data volume:
# Stop and remove the container
docker stop mongodb-rs && docker rm mongodb-rs
# Start a new container WITHOUT --replSet flag
docker run -d \
--name mongodb-standalone \
-p 27017:27017 \
-v mongo-data:/data/db \
mongo:7.0 \
mongod --auth
# Or use environment variable override
docker run -d \
--name mongodb-standalone \
-p 27017:27017 \
-v mongo-data:/data/db \
-e MONGO_INITDB_DATABASE=mydb \
mongo:7.0
For Docker Compose:
# docker-compose.yml — standalone MongoDB
version: '3.8'
services:
mongodb:
image: mongo:7.0
container_name: mongodb-standalone
ports:
- "27017:27017"
volumes:
- mongo_data:/data/db
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: secure_password
command: ["mongod", "--auth"]
volumes:
mongo_data:
Kubernetes (StatefulSet)
For MongoDB in Kubernetes, convert a StatefulSet pod to standalone by removing the replica set configuration from the Pod spec and using a standalone-optimized ConfigMap:
# mongod-standalone-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mongod-standalone-config
data:
mongod.conf: |
storage:
dbPath: /data/db
net:
bindIp: 0.0.0.0
port: 27017
security:
authorization: enabled
Remove the --replSet argument from the container command in the StatefulSet, or point the command at the standalone config:
containers:
- name: mongodb
image: mongo:7.0
command: ["mongod", "--config", "/etc/mongod.conf"]
volumeMounts:
- name: config
mountPath: /etc/mongod.conf
subPath: mongod.conf
Apply the change (the pod restarts, so plan for a brief write outage):
kubectl apply -f mongod-standalone-configmap.yaml
kubectl scale sts mongodb --replicas=0
kubectl delete pod mongodb-0 # if still running
# Deploy a standalone pod or Deployment pointing at the same PVC
Automation Script
For repeatable conversions, use a controlled bash script:
#!/bin/bash
# convert-to-standalone.sh — automated replica-to-standalone conversion
set -euo pipefail
# Configuration
MONGO_HOST="${1:-localhost}"
MONGO_PORT="${2:-27017}"
MONGO_CONF="${3:-/etc/mongod.conf}"
MONGO_DATA="${4:-/var/lib/mongodb}"
BACKUP_DIR="${5:-/backup/mongodb/pre-conversion}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; NC='\033[0m'
log_info() { echo -e "${GREEN}[INFO]${NC} $(date +%H:%M:%S) $*"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $(date +%H:%M:%S) $*"; }
log_error() { echo -e "${RED}[ERROR]${NC} $(date +%H:%M:%S) $*"; }
# Pre-flight checks
log_info "Verifying MongoDB is reachable..."
mongosh --host "${MONGO_HOST}:${MONGO_PORT}" --quiet --eval "db.version()" || {
log_error "Cannot connect to MongoDB at ${MONGO_HOST}:${MONGO_PORT}"
exit 1
}
# Step 1: Check if member is primary
IS_PRIMARY=$(mongosh --host "${MONGO_HOST}:${MONGO_PORT}" --quiet --eval \
"db.isMaster().ismaster" 2>/dev/null)
if [ "${IS_PRIMARY}" = "true" ]; then
log_warn "This host is the PRIMARY! Stepping down..."
mongosh --host "${MONGO_HOST}:${MONGO_PORT}" --quiet --eval "rs.stepDown(120)"
log_info "Waiting for election to settle..."
sleep 30
fi
# Step 2: Get replSet name from config
RS_NAME=$(mongosh --host "${MONGO_HOST}:${MONGO_PORT}" --quiet --eval \
"rs.conf()._id" 2>/dev/null || echo "unknown")
log_info "Replica set name: ${RS_NAME}"
# Step 3: Backup
log_info "Creating backup to ${BACKUP_DIR}/${TIMESTAMP}..."
mkdir -p "${BACKUP_DIR}/${TIMESTAMP}"
mongodump --host "${MONGO_HOST}:${MONGO_PORT}" \
--out "${BACKUP_DIR}/${TIMESTAMP}" \
--gzip --oplog
log_info "Backup complete: ${BACKUP_DIR}/${TIMESTAMP}"
# Step 4: Remove from replica set if not already standalone
if [ "${RS_NAME}" != "unknown" ]; then
log_info "Removing ${MONGO_HOST}:${MONGO_PORT} from replica set..."
# This must run on the primary — assume we can reach it
PRIMARY_HOST=$(mongosh --host "${MONGO_HOST}:${MONGO_PORT}" --quiet --eval \
"rs.status().members.find(m => m.stateStr === 'PRIMARY').name" 2>/dev/null)
if [ -n "${PRIMARY_HOST}" ] && [ "${PRIMARY_HOST}" != "${MONGO_HOST}:${MONGO_PORT}" ]; then
mongosh --host "${PRIMARY_HOST}" --quiet --eval \
"rs.remove('${MONGO_HOST}:${MONGO_PORT}')" || true
fi
sleep 5
fi
# Step 5: Stop mongod
log_info "Stopping mongod..."
sudo systemctl stop mongod
# Step 6: Patch config
log_info "Removing replication config from ${MONGO_CONF}..."
sudo cp "${MONGO_CONF}" "${MONGO_CONF}.bak.${TIMESTAMP}"
# Comment out only the replication: block — a naive range like
# '/^replication:/,/^[a-z]/' would also comment out the next top-level key
sudo sed -i '/^replication:/,/^[^[:space:]]/{/^replication:/s/^/#/;/^[[:space:]]/s/^/#/}' "${MONGO_CONF}"
# Step 7: Remove local database replication files
log_info "Cleaning local database replication metadata..."
# Keep the glob outside the quotes so the shell expands it
sudo rm -rf "${MONGO_DATA}"/local.* 2>/dev/null || true
# Step 8: Restart
log_info "Starting mongod as standalone..."
sudo systemctl start mongod
sleep 5
# Step 9: Verify
VERIFY=$(mongosh --host "${MONGO_HOST}:${MONGO_PORT}" --quiet --eval \
"db.isMaster().hasOwnProperty('setName')" 2>/dev/null)
if [ "${VERIFY}" = "false" ]; then
log_info "Conversion successful — ${MONGO_HOST}:${MONGO_PORT} is now a standalone server."
else
log_error "Conversion may have failed — replica set config still detected."
exit 1
fi
Troubleshooting Decision Tree
Use this decision tree when issues arise during conversion:
1. mongod fails to start after conversion?
├─ Check logs: /var/log/mongodb/mongod.log or journalctl -u mongod
├─ Error: "replica set config not found or invalid"?
│ └─ Still has --replSet flag → remove from systemd unit / config
├─ Error: "dbpath already contains local database with different replSet name"?
│ └─ Clear /var/lib/mongodb/local.* files and restart
├─ Error: permission denied on dbpath?
│ └─ chown -R mongodb:mongodb /var/lib/mongodb
└─ Error: port already in use?
└─ Check for another mongod process: ps aux | grep mongo
2. Data appears incomplete after conversion?
├─ Secondary was not caught up before removal?
│ └─ Restore from backup taken before conversion
├─ Oplog window too small — writes lost?
│ └─ Restore from backup; resync was needed before removal
├─ Check validation: db.runCommand({validate: "collection", full: true})
└─ Compare with backup: mongodump count vs current count
3. Application cannot connect to standalone?
├─ Connection string still uses replicaSet= parameter?
│ └─ Remove replicaSet from URI
├─ Authentication failing?
│ └─ Standalone uses same auth db and credentials — verify user exists
├─ Driver performing SRV DNS lookup that fails?
│ └─ Use direct connection string without +srv
├─ Read preference configured as secondary?
│ └─ Standalone only supports primary reads — change to primary
└─ Network/firewall blocking new port or host?
4. Replica set elections loop after removal?
├─ Remaining members have even number of votes?
│ └─ Add arbiter or remove a voting member to make odd
├─ Remaining members cannot reach each other?
│ └─ Check network, firewall, and DNS resolution
└─ Primary priority is 0?
└─ rs.reconfig() to set at least one member with priority > 0
5. Data divergence between converted standalone and original set?
├─ Converted from a secondary that had replication lag?
│ └─ Do not use this data — restore from backup or primary
├─ Writes occurred on standalone after removal?
│ └─ This is expected — standalone is now independent
└─ Need to merge data back?
└─ Use mongodump from standalone, mongorestore to primary
Summary
| Step | Secondary Conversion | Primary Conversion |
|---|---|---|
| Backup | Required (mongodump/snapshot) | Required |
| Pre-checks | Verify catch-up, oplog window | Verify catch-up, oplog window |
| Removal | rs.remove() on primary | rs.stepDown() then rs.remove() |
| Config | Remove replSetName | Remove replSetName |
| Data dir | Clear local.* files | Clear local.* files |
| Restart | Start without --replSet | Start without --replSet |
| Verify | db.isMaster() no setName | db.isMaster() no setName |
| Applications | Update connection strings | Update connection strings |
Converting a secondary is straightforward and safest: remove it from the replica set, then restart without replication configured. Converting a primary requires stepping it down first. Always backup, verify data integrity post-conversion, and update application connection strings. Keep a rollback plan — restoring the original replica set config and re-adding the member — for any production conversion.
Tip: Before converting in production, run the full procedure in a staging environment with a copy of your data. Measure application behavior, monitoring coverage, and recovery time.
Resources
- MongoDB Replica Set Architecture Documentation
- MongoDB rs.remove() Reference
- MongoDB rs.stepDown() Reference
- MongoDB Replica Set Elections
- MongoDB Oplog Size Configuration
- MongoDB Backup Methods
- MongoDB Connection String URI Format
- MongoDB Change Streams
- Docker MongoDB Official Image
- Kubernetes MongoDB StatefulSet Example