Dump and Restore Procedures Guide
Version: 3.1.0 Status: Production-Ready Last Updated: 2025-12-08
Table of Contents
- Overview
- Dump Operations
- Restore Operations
- Disaster Recovery
- Automated Backup Scheduling
- Storage Planning
- Performance Optimization
- Verification and Validation
- Troubleshooting
- Real-World Examples
Overview
HeliosDB Nano’s dump/restore system provides user-controlled persistence for in-memory databases. This feature enables:
- Manual data snapshots: Dump database state to portable files
- Incremental backups: Append-only dumps for changed data
- Point-in-time recovery: Restore to specific dump snapshots
- Disaster recovery: Automated backup and restoration workflows
- Cross-environment migration: Move data between instances
Dump File Format
HeliosDB dumps use the .heliodump format with the following characteristics:
- Version-stamped: Forward-compatible with future releases
- Compressed: Optional zstd or lz4 compression (50-80% reduction)
- Portable: Platform-independent binary format
- Schema-aware: Includes table definitions, indexes, and constraints
- Incremental-capable: Tracks LSN markers for delta dumps
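Because the format is version-stamped, a file can be sanity-checked cheaply before attempting a restore. A hedged sketch (the 8-byte `HELIODMP` magic follows the file-structure description below; `check_magic` and the demo file are illustrative, not HeliosDB commands):

```shell
# Illustrative helper: read the first 8 bytes of a file and compare them
# against the documented magic string.
check_magic() {
  if [ "$(head -c 8 "$1")" = "HELIODMP" ]; then
    echo "looks like a heliodump"
  else
    echo "not a heliodump"
  fi
}

# Demo against a stand-in file (no real dump needed):
printf 'HELIODMP-demo' > /tmp/demo.heliodump
check_magic /tmp/demo.heliodump   # prints: looks like a heliodump
```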
File Structure:
```text
┌─────────────────────────┐
│ Magic: "HELIODMP"       │
│ Version: 3.1.0          │
├─────────────────────────┤
│ Metadata Header         │
│ - dump_id (UUID)        │
│ - created_at            │
│ - mode (full/inc)       │
│ - last_lsn              │
│ - compression type      │
├─────────────────────────┤
│ Schema Definitions      │
│ - Tables                │
│ - Indexes               │
│ - Constraints           │
├─────────────────────────┤
│ Data Batches            │
│ - Row batches           │
│ - Compressed chunks     │
├─────────────────────────┤
│ Footer                  │
│ - Checksum (CRC32)      │
│ - Statistics            │
└─────────────────────────┘
```
Dump Operations
Basic Dump Command
```bash
# Full database dump
heliosdb-nano dump --output /backups/mydb.heliodump

# Dump with compression (recommended)
heliosdb-nano dump --output /backups/mydb.heliodump --compress zstd

# Dump specific tables
heliosdb-nano dump --output /backups/users-only.heliodump --tables users,sessions

# Dump with custom compression level
heliosdb-nano dump --output /backups/mydb.heliodump --compress zstd --compression-level 9
```
Full vs Incremental Dumps
Full Dump
Exports entire database state:
```bash
heliosdb-nano dump --output /backups/full-$(date +%Y%m%d).heliodump --compress zstd
```
Characteristics:
- Contains all tables, rows, indexes
- Self-contained (no dependencies)
- Larger file size
- Slower to create
- Use for: Weekly/monthly backups, archival, migrations
Incremental Dump (Append Mode)
Exports only changes since last dump:
```bash
# First dump (full)
heliosdb-nano dump --output /backups/base.heliodump

# Subsequent dumps (incremental)
heliosdb-nano dump --output /backups/base.heliodump --append

# Incremental dumps track LSN automatically
```
Characteristics:
- Only changed/new data
- Smaller, faster
- Requires base dump
- Use for: Hourly/daily backups, continuous archival
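Because incrementals must be applied in sequence, it helps to list a base dump's increments in apply order. A minimal sketch, assuming the `base.heliodump.incN` naming used in the restore examples later in this guide (`list_increments` is a hypothetical helper; GNU `sort -V` supplies the numeric ordering):

```shell
# List a base dump's incremental files in apply order.
# Assumes GNU sort (-V = version sort, so inc2 sorts before inc10).
list_increments() {
  ls "$1".inc* 2>/dev/null | sort -V
}

# Demo with stand-in files:
mkdir -p /tmp/incdemo
touch /tmp/incdemo/base.heliodump.inc2 \
      /tmp/incdemo/base.heliodump.inc10 \
      /tmp/incdemo/base.heliodump.inc1
list_increments /tmp/incdemo/base.heliodump
# prints inc1, inc2, inc10 in that order
```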
Compression Options
HeliosDB supports multiple compression algorithms:
| Algorithm | Ratio | Speed | CPU | Use Case |
|---|---|---|---|---|
| none | 1.0x | Fastest | Minimal | Fast backups, pre-compressed data |
| lz4 | 2.0-3.0x | Fast | Low | Frequent backups, balanced |
| zstd (default) | 3.0-5.0x | Medium | Medium | Archival, storage-constrained |
| zstd -9 | 4.0-7.0x | Slow | High | Long-term archival, rarely accessed |
Example:
```bash
# Fast compression (LZ4)
heliosdb-nano dump --output fast.heliodump --compress lz4

# Balanced compression (zstd level 3, default)
heliosdb-nano dump --output balanced.heliodump --compress zstd

# Maximum compression (zstd level 19)
heliosdb-nano dump --output archive.heliodump --compress zstd --compression-level 19
```
Selective Table Dumps
```bash
# Dump single table
heliosdb-nano dump --output users.heliodump --tables users

# Dump multiple tables (comma-separated)
heliosdb-nano dump --output critical.heliodump --tables users,orders,payments

# Dump all except specific tables (exclusion requires scripting):
# list all tables, filter out the unwanted ones, then dump the rest in one pass
TABLES=$(heliosdb-nano tables | grep -v 'logs\|temp' | paste -sd,)
heliosdb-nano dump --output prod.heliodump --tables "$TABLES"
```
Monitoring Dump Progress
```bash
# Run dump with verbose output
heliosdb-nano dump --output backup.heliodump --verbose

# Output example:
# [2025-12-08 10:00:00] Starting dump...
# [2025-12-08 10:00:01] Dumping table 'users' (1/5)
# [2025-12-08 10:00:02] Exported 10,000 rows (1.2 MB)
# [2025-12-08 10:00:03] Dumping table 'orders' (2/5)
# [2025-12-08 10:00:05] Exported 50,000 rows (8.5 MB)
# ...
# [2025-12-08 10:00:30] Dump complete: 5 tables, 250,000 rows, 45.2 MB (compressed: 12.1 MB)
```
Checking Dirty State
Before dumping, check if there are uncommitted changes:
```sql
-- Check if database has unsaved changes
SELECT pg_stat_get_dirty_bytes() AS dirty_bytes,
       pg_stat_get_dirty_bytes() > 0 AS needs_dump;

-- View dirty state by table
SELECT table_name,
       pg_table_dirty_bytes(table_name) AS dirty_bytes
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY dirty_bytes DESC;
```
Restore Operations
Basic Restore Command
```bash
# Restore full dump
heliosdb-nano restore --input /backups/mydb.heliodump

# Restore specific tables
heliosdb-nano restore --input /backups/mydb.heliodump --tables users,sessions

# Restore to running instance (requires connection params)
heliosdb-nano restore --input /backups/mydb.heliodump --host localhost --port 5432
```
Restore Modes
1. Clean Restore (Default)
Drops existing data and restores from dump:
```bash
heliosdb-nano restore --input backup.heliodump --mode clean
```
Warning: This will delete all existing data!
2. Append Restore
Adds dump data to existing database (does not drop tables):
```bash
heliosdb-nano restore --input backup.heliodump --mode append
```
Conflicts: Primary key conflicts will cause errors unless --on-conflict is specified:
```bash
# Skip conflicting rows
heliosdb-nano restore --input backup.heliodump --mode append --on-conflict skip

# Update conflicting rows (upsert)
heliosdb-nano restore --input backup.heliodump --mode append --on-conflict update
```
3. Incremental Restore
Restore base dump + incremental dumps in sequence:
```bash
# Restore base
heliosdb-nano restore --input /backups/base.heliodump

# Apply incremental dumps in order
heliosdb-nano restore --input /backups/base.heliodump.inc1 --mode append
heliosdb-nano restore --input /backups/base.heliodump.inc2 --mode append
heliosdb-nano restore --input /backups/base.heliodump.inc3 --mode append
```
Restore Validation
```bash
# Dry-run mode (verify without restoring)
heliosdb-nano restore --input backup.heliodump --dry-run

# Output:
# Dump file: backup.heliodump
# Version: 3.1.0 (compatible)
# Created: 2025-12-08 10:00:00
# Tables: users, orders, payments, sessions, logs
# Total rows: 250,000
# Compressed size: 12.1 MB
# Uncompressed size: 45.2 MB
# Estimated restore time: ~30 seconds
# Validation: PASSED
```
Disaster Recovery
Automated Backup Strategy
Recommended 3-2-1 Backup Strategy:
- 3 copies: Production + 2 backups
- 2 media types: Local disk + cloud storage
- 1 offsite: Cloud or remote datacenter
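The 3-2-1 rule can be audited mechanically. A hedged sketch that only counts copies across locations (`count_copies` and the directory layout are illustrative; a real setup would check remote targets via `aws s3 ls` or `rclone lsf` rather than local stand-ins):

```shell
# Count how many of the given locations hold a copy of the named dump.
count_copies() {
  name="$1"; shift
  n=0
  for dir in "$@"; do
    [ -f "$dir/$name" ] && n=$((n + 1))
  done
  echo "$n"
}

# Demo with local stand-in locations:
mkdir -p /tmp/prod /tmp/local-backup /tmp/offsite
touch /tmp/prod/db.heliodump /tmp/local-backup/db.heliodump /tmp/offsite/db.heliodump
copies=$(count_copies db.heliodump /tmp/prod /tmp/local-backup /tmp/offsite)
echo "copies: $copies"   # prints: copies: 3
```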
Example DR Workflow
Daily Backup Automation
```bash
#!/bin/bash
set -e

BACKUP_DIR="/var/backups/heliosdb"
DATE=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=30

# Create backup directory
mkdir -p "$BACKUP_DIR/daily"

# Dump database
heliosdb-nano dump \
  --output "$BACKUP_DIR/daily/heliosdb-$DATE.heliodump" \
  --compress zstd \
  --compression-level 3 \
  --verbose

# Upload to S3 (optional)
aws s3 cp "$BACKUP_DIR/daily/heliosdb-$DATE.heliodump" \
  s3://my-bucket/heliosdb-backups/daily/

# Cleanup old backups
find "$BACKUP_DIR/daily" -name "*.heliodump" -mtime +$RETENTION_DAYS -delete

# Log success
echo "[$DATE] Backup completed successfully" >> "$BACKUP_DIR/backup.log"
```
Hourly Incremental Backups
```bash
#!/bin/bash
set -e

BACKUP_DIR="/var/backups/heliosdb/incremental"
DATE=$(date +%Y%m%d-%H%M%S)
BASE_DUMP="$BACKUP_DIR/base.heliodump"

# Create base dump if missing
if [ ! -f "$BASE_DUMP" ]; then
  heliosdb-nano dump --output "$BASE_DUMP" --compress zstd
  echo "[$DATE] Created base dump" >> "$BACKUP_DIR/backup.log"
else
  # Append incremental changes
  heliosdb-nano dump --output "$BASE_DUMP" --append --compress zstd
  echo "[$DATE] Incremental backup appended" >> "$BACKUP_DIR/backup.log"
fi

# Rotate base dump weekly (Monday at midnight)
if [ "$(date +%u)" -eq 1 ] && [ "$(date +%H)" -eq 0 ]; then
  mv "$BASE_DUMP" "$BACKUP_DIR/base-$(date +%Y%m%d).heliodump"
  echo "[$DATE] Rotated base dump" >> "$BACKUP_DIR/backup.log"
fi
```
Disaster Recovery Procedure
Scenario: Production server crashes, need to restore from backup
```bash
# Step 1: Provision new server
# (Manual step: Create new VM/container)

# Step 2: Install HeliosDB Nano
wget https://releases.heliosdb.com/v3.1.0/heliosdb-nano-linux-amd64.tar.gz
tar xzf heliosdb-nano-linux-amd64.tar.gz
sudo mv heliosdb-nano /usr/local/bin/

# Step 3: Download latest backup
aws s3 cp s3://my-bucket/heliosdb-backups/daily/heliosdb-latest.heliodump ./

# Step 4: Start in-memory instance
heliosdb-nano start --memory --port 5432 &
DB_PID=$!
sleep 5  # Wait for startup

# Step 5: Restore from backup
heliosdb-nano restore --input heliosdb-latest.heliodump --host localhost --port 5432

# Step 6: Verify restoration
psql -h localhost -p 5432 -c "SELECT COUNT(*) FROM users;"
psql -h localhost -p 5432 -c "SELECT MAX(created_at) FROM orders;"

# Step 7: (Optional) Dump to persistent storage if switching modes
heliosdb-nano dump --output /data/heliosdb-restored.heliodump

# Step 8: Update application connection strings
# (Manual step: Update config/environment variables)

echo "Disaster recovery completed at $(date)"
```
Testing DR Procedures
Monthly DR Drill:
```bash
#!/bin/bash
set -e

TEST_DIR="/tmp/heliosdb-dr-test-$(date +%s)"
mkdir -p "$TEST_DIR"

echo "Starting DR test..."

# 1. Download latest backup
aws s3 cp s3://my-bucket/heliosdb-backups/daily/heliosdb-latest.heliodump "$TEST_DIR/"

# 2. Start test instance
heliosdb-nano start --memory --port 15432 --data-dir "$TEST_DIR/db" &
TEST_PID=$!
sleep 5

# 3. Restore backup
heliosdb-nano restore --input "$TEST_DIR/heliosdb-latest.heliodump" --host localhost --port 15432

# 4. Run validation queries
psql -h localhost -p 15432 -c "SELECT COUNT(*) FROM users;" > "$TEST_DIR/user_count.txt"
psql -h localhost -p 15432 -c "SELECT COUNT(*) FROM orders;" > "$TEST_DIR/order_count.txt"

# 5. Compare with production counts (fetch via monitoring API)
PROD_USERS=$(curl -s https://monitoring.example.com/metrics/user_count)
TEST_USERS=$(grep -oP '\d+' "$TEST_DIR/user_count.txt")

if [ "$PROD_USERS" -eq "$TEST_USERS" ]; then
  echo "DR Test PASSED: User counts match"
else
  echo "DR Test FAILED: User count mismatch (prod: $PROD_USERS, test: $TEST_USERS)"
  exit 1
fi

# 6. Cleanup
kill $TEST_PID
rm -rf "$TEST_DIR"

echo "DR test completed successfully"
```
Automated Backup Scheduling
Using Cron
```bash
# Edit crontab
crontab -e

# Add backup jobs:

# Daily full backup at 2 AM
0 2 * * * /usr/local/bin/heliosdb-daily-backup.sh >> /var/log/heliosdb-backup.log 2>&1

# Hourly incremental backup
0 * * * * /usr/local/bin/heliosdb-hourly-incremental.sh >> /var/log/heliosdb-incremental.log 2>&1

# Weekly DR test on Sundays at 3 AM
0 3 * * 0 /usr/local/bin/heliosdb-dr-test.sh >> /var/log/heliosdb-dr-test.log 2>&1
```
Using systemd Timers
Create /etc/systemd/system/heliosdb-backup.service:
```ini
[Unit]
Description=HeliosDB Daily Backup
After=network.target

[Service]
Type=oneshot
User=heliosdb
ExecStart=/usr/local/bin/heliosdb-daily-backup.sh
StandardOutput=journal
StandardError=journal
```
Create /etc/systemd/system/heliosdb-backup.timer:
```ini
[Unit]
Description=HeliosDB Daily Backup Timer
Requires=heliosdb-backup.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```
Enable and start:
```bash
sudo systemctl enable heliosdb-backup.timer
sudo systemctl start heliosdb-backup.timer
sudo systemctl list-timers  # Verify
```
Configuration-Based Scheduling
Add to config.toml:
```toml
[dump]
# Auto-dump schedule (cron syntax)
schedule = "0 */6 * * *"  # Every 6 hours

# Auto-dump when WAL size exceeds threshold (1 GB)
wal_size_threshold = 1073741824

# Default compression
compression = "zstd"
compression_level = 3

# Backup directory
backup_dir = "/var/backups/heliosdb"

# Retention policy
retention_days = 30

# Notification on failure
notify_email = "ops@example.com"
```
Storage Planning
Size Estimation
Rule of Thumb:
- Uncompressed dump: ~90% of in-memory size
- Compressed dump (zstd): 20-40% of uncompressed
- Incremental dump: 5-15% of full dump (per day)
Example Calculation:
```text
In-memory database:       10 GB
Uncompressed dump:         9 GB
Compressed dump (zstd):  2.7 GB

Daily incremental:      ~270 MB
Weekly full:             2.7 GB
Monthly storage (1 weekly + 28 daily): 2.7 GB + (28 × 270 MB) ≈ 10.3 GB
```
Storage Requirements Table
| Database Size | Compressed Dump | Daily Inc | Monthly Total |
|---|---|---|---|
| 100 MB | 30 MB | 3 MB | 120 MB |
| 1 GB | 300 MB | 30 MB | 1.2 GB |
| 10 GB | 3 GB | 300 MB | 12 GB |
| 100 GB | 30 GB | 3 GB | 120 GB |
| 1 TB | 300 GB | 30 GB | 1.2 TB |
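The table rows follow directly from the rule-of-thumb ratios. A back-of-envelope sketch in shell arithmetic (the ratios are assumed midpoints: compressed ≈ 30% of raw, one daily incremental ≈ 10% of a full dump; `estimate_monthly_mb` is illustrative, not a HeliosDB tool):

```shell
# Monthly storage ≈ 1 compressed full dump + 28 daily incrementals.
estimate_monthly_mb() {
  db_mb="$1"
  full_mb=$((db_mb * 30 / 100))    # compressed full dump (~30% of raw)
  inc_mb=$((full_mb * 10 / 100))   # one daily incremental (~10% of full)
  echo $((full_mb + 28 * inc_mb))
}

estimate_monthly_mb 1024    # 1 GB database  -> prints 1147  (~1.1 GB/month)
estimate_monthly_mb 10240   # 10 GB database -> prints 11668 (~11.4 GB/month)
```

Integer division makes the figures slightly conservative compared with the rounded table values.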
Measuring Actual Dump Size
```bash
# Dry-run to estimate size
heliosdb-nano dump --output /dev/null --dry-run --compress zstd

# Output:
# Estimated dump size: 2.7 GB (compressed)
# Estimated time: 120 seconds

# Actual dump with size tracking
heliosdb-nano dump --output backup.heliodump --compress zstd --verbose | \
  tee >(grep "Dump complete" | awk '{print $NF}')
```
Performance Optimization
Dump Performance Tips
- Use compression for I/O-bound systems:

  ```bash
  # Fast storage (SSD): compression may slow things down
  heliosdb-nano dump --output backup.heliodump --compress none

  # Slow storage (HDD/network): use compression
  heliosdb-nano dump --output backup.heliodump --compress lz4
  ```

- Parallel table dumps:

  ```bash
  # Dump tables in parallel (requires scripting)
  tables=(users orders products sessions logs)
  for table in "${tables[@]}"; do
    heliosdb-nano dump --output "backup-$table.heliodump" --tables "$table" &
  done
  wait
  ```

- Tune batch size (config.toml):

  ```toml
  [dump]
  batch_size = 10000  # Rows per batch (default: 1000)
  ```
Restore Performance Tips
- Disable indexes during bulk restore:

  ```sql
  -- Drop indexes before restore
  DROP INDEX idx_users_email;
  DROP INDEX idx_orders_user_id;

  -- Restore
  -- (run heliosdb-nano restore command)

  -- Recreate indexes
  CREATE INDEX idx_users_email ON users(email);
  CREATE INDEX idx_orders_user_id ON orders(user_id);
  ```

- Use uncompressed dumps for repeated restores:

  ```bash
  # Create uncompressed dump for faster restore
  heliosdb-nano dump --output fast-restore.heliodump --compress none
  ```

- Batch insert configuration:

  ```toml
  [restore]
  batch_size = 50000  # Larger batches for bulk restore
  ```
Benchmarks
| Operation | Size | Time | Throughput |
|---|---|---|---|
| Dump (no compression) | 1 GB | 15s | 66 MB/s |
| Dump (lz4) | 1 GB → 400 MB | 22s | 45 MB/s |
| Dump (zstd) | 1 GB → 300 MB | 35s | 28 MB/s |
| Restore (no compression) | 1 GB | 25s | 40 MB/s |
| Restore (lz4) | 400 MB → 1 GB | 30s | 33 MB/s |
| Restore (zstd) | 300 MB → 1 GB | 40s | 25 MB/s |
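Throughput in the table is just uncompressed size divided by wall time. A quick sanity-check sketch (integer division; the figures come from the rows above):

```shell
# throughput (MB/s) = uncompressed size (MB) / elapsed time (s)
throughput_mb_s() { echo $(( $1 / $2 )); }

throughput_mb_s 1000 15   # dump, no compression    -> prints 66
throughput_mb_s 1000 35   # dump, zstd              -> prints 28
throughput_mb_s 1000 25   # restore, no compression -> prints 40
```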
Verification and Validation
Checksum Verification
Dump files include CRC32 checksums for integrity:
```bash
# Verify dump file integrity
heliosdb-nano verify --input backup.heliodump

# Output:
# Verifying backup.heliodump...
# Magic: OK
# Version: 3.1.0 (compatible)
# Metadata checksum: OK
# Data checksum: OK (CRC32: 0xA1B2C3D4)
# Verification: PASSED
```
Data Validation
Compare row counts after restore:
```bash
#!/bin/bash
TABLES=("users" "orders" "products" "sessions")

for table in "${TABLES[@]}"; do
  # Count in dump
  DUMP_COUNT=$(heliosdb-nano inspect --input backup.heliodump --table "$table" | grep "Row count" | awk '{print $3}')

  # Count in restored database
  RESTORE_COUNT=$(psql -h localhost -p 5432 -t -c "SELECT COUNT(*) FROM $table")

  if [ "$DUMP_COUNT" -eq "$RESTORE_COUNT" ]; then
    echo "✓ $table: $RESTORE_COUNT rows (match)"
  else
    echo "✗ $table: MISMATCH (dump: $DUMP_COUNT, restore: $RESTORE_COUNT)"
    exit 1
  fi
done

echo "All tables validated successfully"
```
Schema Validation
```sql
-- Compare schema after restore
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;

-- Verify indexes
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename, indexname;
```
Troubleshooting
Issue: Dump Fails with “Out of Memory”
Symptoms:
```text
ERROR: cannot allocate memory for dump buffer
```
Solutions:
- Reduce batch size:

  ```toml
  [dump]
  batch_size = 1000  # Smaller batches
  ```

- Dump tables individually:

  ```bash
  for table in $(heliosdb-nano tables); do
    heliosdb-nano dump --output "backup-$table.heliodump" --tables "$table"
  done
  ```

- Use streaming mode:

  ```bash
  heliosdb-nano dump --output backup.heliodump --streaming
  ```
Issue: Restore Fails with “Version Incompatible”
Symptoms:
```text
ERROR: dump version 3.0.0 incompatible with HeliosDB 3.1.0
```
Solutions:
- Upgrade dump format:

  ```bash
  heliosdb-nano migrate-dump --input old.heliodump --output new.heliodump
  ```

- Use compatible version:

  ```bash
  # Download v3.0.0 binary
  heliosdb-nano-v3.0.0 restore --input old.heliodump
  ```
Issue: Corrupted Dump File
Symptoms:
```text
ERROR: checksum mismatch in dump file
ERROR: unexpected end of file
```
Solutions:
- Verify file integrity:

  ```bash
  heliosdb-nano verify --input backup.heliodump
  ```

- Attempt partial recovery:

  ```bash
  heliosdb-nano restore --input backup.heliodump --ignore-errors --partial
  ```

- Use previous backup:

  ```bash
  # Find the most recent valid backup
  for f in /backups/*.heliodump; do
    if heliosdb-nano verify --input "$f" > /dev/null 2>&1; then
      echo "Valid: $f"
    fi
  done
  ```
Real-World Examples
Example 1: E-commerce Site Backup
Requirements:
- 24/7 uptime
- Point-in-time recovery within 1 hour
- 30-day retention
Solution:
```bash
# Daily full backup (2 AM, low traffic)
0 2 * * * heliosdb-nano dump --output /backups/daily/full-$(date +\%Y\%m\%d).heliodump --compress zstd

# Hourly incremental
0 * * * * heliosdb-nano dump --output /backups/hourly/inc-$(date +\%Y\%m\%d-\%H).heliodump --append --compress lz4

# Upload to S3 (every 5 minutes)
*/5 * * * * aws s3 sync /backups/hourly s3://backups/hourly/

# Cleanup old backups
0 3 * * * find /backups/daily -mtime +30 -delete
```
Example 2: Development Environment
Requirements:
- Fast reset to clean state
- Minimal storage
Solution:
```bash
# Create seed data dump once
heliosdb-nano dump --output /fixtures/seed-data.heliodump --compress zstd

# Reset before each test run
heliosdb-nano restore --input /fixtures/seed-data.heliodump --mode clean
```
Example 3: Analytics Pipeline
Requirements:
- Export processed data daily
- Retain for 90 days
Solution:
```bash
#!/bin/bash
DATE=$(date +%Y%m%d)

# Export analytics tables
heliosdb-nano dump \
  --output "/analytics/exports/analytics-$DATE.heliodump" \
  --tables "daily_metrics,user_aggregates,revenue_summary" \
  --compress zstd \
  --compression-level 9

# Upload to data warehouse
rclone copy "/analytics/exports/analytics-$DATE.heliodump" "s3:data-warehouse/analytics/"

# Cleanup
find /analytics/exports -mtime +90 -delete
```
See Also
- In-Memory Mode Guide - In-memory database operations
- CLI Reference - Complete command reference
- Configuration Reference - Dump/restore settings
- Disaster Recovery Guide - DR best practices
- Performance Tuning - Optimization techniques
Version: 3.1.0 Last Updated: 2025-12-08 Maintained by: HeliosDB Team