In-Memory Mode User Guide
Version: 3.1.0 Status: Production-Ready Last Updated: 2025-12-08
Table of Contents
- Overview
- When to Use In-Memory Mode
- Getting Started
- Multi-User Setup
- Session Management
- Isolation Levels
- Performance Characteristics
- Resource Quotas
- Best Practices
- Example Scenarios
- Troubleshooting
Overview
In-memory mode allows HeliosDB Nano to operate entirely in RAM without persisting data to disk during normal operations. This mode provides:
- Ultra-fast performance: No disk I/O bottlenecks
- Full ACID guarantees: Complete transaction support with MVCC
- Multi-user concurrency: Multiple concurrent sessions with isolation
- User-triggered persistence: Manual dump/restore for data durability
- Session management: Per-user transaction contexts and resource limits
In-memory mode is ideal for:
- Development and testing environments
- Caching layers with periodic snapshots
- Real-time analytics with high throughput requirements
- Temporary data processing pipelines
- Session storage for web applications
When to Use In-Memory Mode
Use In-Memory Mode When:
- Performance is Critical
  - Sub-millisecond query latency required
  - High transaction throughput (10,000+ TPS)
  - Real-time data processing workloads
- Data is Temporary
  - Session data with a short TTL
  - Cache invalidation patterns
  - Test data that can be regenerated
- Controlled Persistence
  - You want explicit control over when data is saved
  - Scheduled backups are sufficient
  - Point-in-time snapshots meet requirements
Use Persistent Mode When:
- Data Durability is Critical
  - Financial transactions
  - User-generated content
  - Audit logs and compliance data
- Large Datasets
  - Working set exceeds available RAM
  - Memory cost is prohibitive compared to disk
- Automatic Recovery Required
  - Unplanned restarts must preserve data
  - No manual intervention on startup
Getting Started
Starting In-Memory Server
```bash
# Start in-memory server on default port (5432)
heliosdb-nano start --memory

# Start with custom port and bind address
heliosdb-nano start --memory --port 5433 --listen 0.0.0.0

# Start with configuration file
heliosdb-nano start --memory --config /path/to/config.toml
```

Starting In-Memory REPL
```bash
# Launch interactive REPL in memory-only mode
heliosdb-nano repl --memory

# With custom configuration
heliosdb-nano repl --memory --config ./config.toml

# With automatic dump on shutdown for persistence
heliosdb-nano repl --memory --dump-on-shutdown

# With custom dump file location
heliosdb-nano repl --memory --dump-on-shutdown --dump-file /backups/session_$(date +%s).sql
```

Data Persistence in In-Memory Mode
While in-memory mode doesn't persist data by default, you can save data with manual dumps or by starting with the --dump-on-shutdown flag:
Option 1: Manual Dump During Session
```bash
heliosdb-nano repl --memory
```

```
# In the REPL, when ready to save:
heliosdb> \dump /path/to/backup.sql
heliosdb> \q    # Exit
```

Option 2: Automatic Dump on Exit
```bash
# Start REPL with automatic shutdown dump
heliosdb-nano repl --memory --dump-on-shutdown --dump-file mydb.sql
```

```
# Make changes, then exit - the dump happens automatically
heliosdb> CREATE TABLE users (id INT, name TEXT);
heliosdb> INSERT INTO users VALUES (1, 'Alice');
heliosdb> \q    # Automatically dumps to mydb.sql
```

```bash
# Restore in a new session
heliosdb-nano repl --memory --execute "SOURCE mydb.sql"
```

Option 3: Periodic Snapshots
```bash
# Combined approach: both periodic dumps and a shutdown dump
heliosdb-nano repl --memory --dump-on-shutdown --dump-file session_backup.sql
```

```
# In the REPL, periodically dump critical data:
heliosdb> \dump /backups/hourly_$(date +%s).sql
```

Comparison of Persistence Methods:
| Method | Usage | When Data Saved | Restore Method |
|---|---|---|---|
| Manual \dump | Explicit dump commands | When user runs command | Manual restore |
| --dump-on-shutdown | Flag on startup | On process exit | Auto-source or manual |
| Server mode | Persistent flag | Continuous to disk | Automatic on startup |
| Periodic backups | Scripted dumps | On schedule | Manual restore from backup |
Embedded Mode (In-Process)
```rust
use heliosdb_nano::EmbeddedDatabase;

// Create an in-memory database instance
let db = EmbeddedDatabase::new_in_memory()?;

// Execute SQL
db.execute("CREATE TABLE users (id INT PRIMARY KEY, name TEXT)")?;
db.execute("INSERT INTO users VALUES (1, 'Alice')")?;

let results = db.query("SELECT * FROM users", &[])?;
```

Multi-User Setup
Creating User Sessions
In-memory mode supports concurrent user sessions with independent transaction contexts:
```sql
-- Sessions are created automatically on connection.
-- Each connection gets a unique session ID.

-- View active sessions
SELECT * FROM pg_stat_activity;

-- Example output:
-- pid | usename | application_name | state  | query_start
-- ----|---------|------------------|--------|--------------------
-- 101 | alice   | psql             | active | 2025-12-08 10:00:00
-- 102 | bob     | python           | idle   | 2025-12-08 10:01:30
```

User Authentication
Configure authentication in config.toml:
```toml
[session]
# Session timeout in seconds
timeout_secs = 3600

# Maximum sessions per user
max_sessions_per_user = 10

# Default isolation level
default_isolation_level = "READ_COMMITTED"

[auth]
# Authentication method: trust, password, or scram-sha-256
method = "password"

# User credentials (for development - use external auth in production)
[[auth.users]]
username = "alice"
password_hash = "$argon2id$v=19$m=4096,t=3,p=1$..."
roles = ["read", "write"]

[[auth.users]]
username = "bob"
password_hash = "$argon2id$v=19$m=4096,t=3,p=1$..."
roles = ["read"]
```

Connecting as Different Users
```bash
# Connect via psql
psql "postgresql://alice:password@localhost:5432/heliosdb"
```

```python
# Connect via Python
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    user="alice",
    password="password",
    database="heliosdb",
)
```

Session Management
Session Lifecycle
- Connection: Client connects and authenticates
- Session Creation: System assigns a unique session ID
- Transaction Execution: User executes queries within the session context
- Session Timeout: Idle sessions are terminated after timeout_secs
- Explicit Termination: Client disconnects or issues \q
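The lifecycle maps directly onto ordinary client code. A minimal sketch using psycopg2 and the connection settings shown earlier (pg_backend_pid() is the standard PostgreSQL function for reading the current session's backend ID; that HeliosDB exposes it the same way is an assumption):

```python
import psycopg2

# Connect and authenticate: a session is created automatically
conn = psycopg2.connect(
    host="localhost", port=5432,
    user="alice", password="password", database="heliosdb",
)

# Execute queries within this session's transaction context
with conn.cursor() as cur:
    cur.execute("SELECT pg_backend_pid()")  # this session's ID
    print("session id:", cur.fetchone()[0])
conn.commit()

# Explicit termination; idle sessions are otherwise
# reaped by the server after timeout_secs
conn.close()
```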
Session Isolation
Each session maintains:
- Independent transaction state: BEGIN/COMMIT/ROLLBACK per session
- Separate snapshot: MVCC snapshot based on isolation level
- Resource quotas: Per-user memory and connection limits
- Audit trail: All session actions logged
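To see the independent transaction state in practice, here is a sketch with two concurrent psycopg2 connections (the accounts table reuses the example from the Isolation Levels section below):

```python
import psycopg2

dsn = "postgresql://alice:password@localhost:5432/heliosdb"
s1 = psycopg2.connect(dsn)  # session 1
s2 = psycopg2.connect(dsn)  # session 2

with s1.cursor() as c1, s2.cursor() as c2:
    # Session 1 writes inside an open, uncommitted transaction
    c1.execute("UPDATE accounts SET balance = 1200 WHERE id = 1")

    # Session 2 has its own transaction state and MVCC snapshot:
    # it still sees the last committed balance, not session 1's write
    c2.execute("SELECT balance FROM accounts WHERE id = 1")
    print(c2.fetchone())  # e.g. (1000,)

s1.rollback()  # discarding session 1's change affects no other session
s2.rollback()
s1.close()
s2.close()
```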
Session Monitoring
```sql
-- View all active sessions
SELECT pid, usename, state, query_start, query
FROM pg_stat_activity
WHERE state != 'idle';

-- Count sessions per user
SELECT usename, COUNT(*) AS session_count
FROM pg_stat_activity
GROUP BY usename;

-- Check session resource usage
SELECT
    pid,
    usename,
    pg_session_memory_usage(pid) AS memory_bytes,
    xact_start AS transaction_start
FROM pg_stat_activity;
```

Isolation Levels
HeliosDB Nano supports three PostgreSQL-compatible isolation levels:
1. READ COMMITTED (Default)
Guarantees:
- No dirty reads (uncommitted data)
- Fresh snapshot per SQL statement
- See latest committed data
Use Cases:
- Web applications with short transactions
- Interactive queries and reports
- Low-contention workloads
Example:
```sql
-- Session 1
BEGIN;
SELECT balance FROM accounts WHERE id = 1;  -- Returns 1000

-- Session 2 (concurrent)
BEGIN;
UPDATE accounts SET balance = 1200 WHERE id = 1;
COMMIT;

-- Session 1 (continued)
SELECT balance FROM accounts WHERE id = 1;  -- Returns 1200 (sees update)
COMMIT;
```

2. REPEATABLE READ
Guarantees:
- No dirty reads
- No non-repeatable reads
- Consistent snapshot throughout transaction
- May see phantom rows from concurrent inserts
Use Cases:
- Long-running reports requiring consistency
- Multi-step transactions reading same data
- Data exports and backups
Example:
```sql
-- Session 1
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;  -- Returns 1000

-- Session 2 (concurrent)
UPDATE accounts SET balance = 1200 WHERE id = 1;
COMMIT;

-- Session 1 (continued)
SELECT balance FROM accounts WHERE id = 1;  -- Returns 1000 (snapshot isolated)
COMMIT;
```

3. SERIALIZABLE
Guarantees:
- Full serializability
- No dirty reads, non-repeatable reads, or phantom reads
- Transactions appear to execute sequentially
- May abort with serialization errors
Use Cases:
- Financial transactions requiring strict consistency
- Concurrent inventory management
- Critical business logic with conflicts
Example:
```sql
-- Session 1
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;  -- Returns 1000
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session 2 (concurrent)
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;  -- Returns 1000
UPDATE accounts SET balance = balance - 200 WHERE id = 1;
COMMIT;

-- Session 1 (continued)
COMMIT;  -- ERROR: could not serialize access due to concurrent update
```

Setting Isolation Levels
```sql
-- Per transaction
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- For the current session
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```
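Client libraries can also set the level per connection. A sketch with psycopg2, on the assumption that HeliosDB accepts the standard PostgreSQL session settings used here:

```python
import psycopg2

conn = psycopg2.connect("postgresql://alice:password@localhost:5432/heliosdb")

# Applies to every subsequent transaction on this connection
conn.set_session(isolation_level="REPEATABLE READ")

with conn.cursor() as cur:
    # Assumes the server supports SHOW transaction_isolation
    cur.execute("SHOW transaction_isolation")
    print(cur.fetchone())  # ('repeatable read',)
conn.close()
```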
```toml
# In config.toml (default for all sessions)
[session]
default_isolation_level = "SERIALIZABLE"
```

Performance Characteristics
Latency Benchmarks (v3.1.0)
| Operation | READ COMMITTED | REPEATABLE READ | SERIALIZABLE |
|---|---|---|---|
| Simple SELECT | 0.05ms | 0.08ms | 0.12ms |
| Single INSERT | 0.15ms | 0.18ms | 0.25ms |
| UPDATE (no conflicts) | 0.20ms | 0.22ms | 0.30ms |
| UPDATE (with conflicts) | 0.30ms | 0.35ms | 5-30ms (retry) |
| Transaction COMMIT | 0.10ms | 0.12ms | 0.50ms |
Throughput Benchmarks
| Workload | Throughput | Concurrency | Notes |
|---|---|---|---|
| Read-only queries | 50,000 QPS | 100 sessions | No lock contention |
| Mixed read/write | 20,000 TPS | 50 sessions | 80% reads, 20% writes |
| Write-heavy | 8,000 TPS | 20 sessions | High lock contention |
| Serializable TXNs | 3,000 TPS | 10 sessions | Conflict detection overhead |
Memory Usage
- Base overhead: ~50MB (runtime + RocksDB)
- Per session: ~2MB (snapshot + buffers)
- Per active transaction: ~500KB (write buffer)
- Per 1M rows (100 bytes avg): ~100MB (uncompressed)

Scaling Recommendations
| Sessions | RAM Required | Notes |
|---|---|---|
| 1-10 | 256MB | Development/testing |
| 10-100 | 1GB | Small applications |
| 100-1000 | 4GB | Medium workloads |
| 1000-10000 | 16GB+ | High concurrency |
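As a rough guide, the per-component figures above can be combined into a quick estimate. A back-of-the-envelope helper (a sketch; actual usage depends on row sizes, indexes, and workload):

```python
def estimate_ram_mb(sessions: int, active_txns: int, rows_millions: float) -> float:
    """Rough RAM estimate from the per-component figures above."""
    base_mb = 50                   # runtime + RocksDB overhead
    session_mb = 2 * sessions      # ~2MB per session
    txn_mb = 0.5 * active_txns     # ~500KB per active transaction
    data_mb = 100 * rows_millions  # ~100MB per 1M rows (100-byte avg, uncompressed)
    return base_mb + session_mb + txn_mb + data_mb

# e.g. 100 sessions, 20 active transactions, 5M rows -> 760MB
print(estimate_ram_mb(100, 20, 5))
```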
Resource Quotas
Configuring Quotas
```toml
[resource_quotas]
# Per-user memory limit (bytes, 0 = unlimited)
memory_limit_per_user = 1073741824  # 1GB

# Per-user connection limit
connection_limit_per_user = 10

# Per-transaction timeout (seconds)
transaction_timeout_secs = 300  # 5 minutes

# Lock acquisition timeout (milliseconds)
lock_timeout_ms = 30000  # 30 seconds
```

Monitoring Resource Usage
```sql
-- Check current usage vs limits
SELECT
    usename,
    COUNT(*) AS connections,
    SUM(pg_session_memory_usage(pid)) AS total_memory_bytes
FROM pg_stat_activity
GROUP BY usename;

-- Find sessions exceeding limits
SELECT
    pid,
    usename,
    state,
    pg_session_memory_usage(pid) / 1024 / 1024 AS memory_mb
FROM pg_stat_activity
WHERE pg_session_memory_usage(pid) > 1073741824;  -- 1GB
```

Quota Enforcement
When quotas are exceeded:
- Connection limit: New connections rejected with error
- Memory limit: Transaction aborts with OOM error
- Transaction timeout: Transaction automatically rolled back
- Lock timeout: Lock acquisition fails, transaction can retry
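Client code should anticipate these failures. A retry sketch with psycopg2, keyed off the error texts shown in the Troubleshooting section (treating lock timeouts as retryable is a policy choice, and matching on message strings is an assumption about how HeliosDB reports these errors):

```python
import time
import psycopg2

def run_with_retries(conn, sql: str, params=(), max_retries: int = 3) -> None:
    """Retry transient lock-timeout failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
            conn.commit()
            return
        except psycopg2.OperationalError as exc:
            conn.rollback()
            # Lock timeouts can be retried; memory-limit aborts usually cannot
            if "lock timeout" in str(exc) and attempt < max_retries - 1:
                time.sleep(0.1 * (2 ** attempt))
                continue
            raise
```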
Best Practices
1. Data Durability
```bash
# Schedule periodic dumps (cron)
0 */6 * * * heliosdb-nano dump --output /backups/db-$(date +\%Y\%m\%d-\%H\%M).heliodump
```

```sql
-- Check dirty state before shutdown
SELECT pg_stat_get_dirty_bytes() > 0 AS has_unsaved_changes;
```

```bash
# Dump if dirty
heliosdb-nano dump --output /backups/shutdown-$(date +\%Y\%m\%d-\%H\%M\%S).heliodump
```

2. Transaction Management
```sql
-- Keep transactions short
BEGIN;
UPDATE inventory SET quantity = quantity - 1 WHERE product_id = 123;
COMMIT;  -- Don't leave idle transactions open

-- Use the appropriate isolation level
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- For most cases
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;    -- Only when necessary
```

3. Lock Contention
```sql
-- Avoid long-running transactions holding locks
BEGIN;
SELECT * FROM hot_table WHERE id = 1 FOR UPDATE;  -- Acquires lock
-- ... do work quickly ...
COMMIT;  -- Release lock ASAP

-- Use batch updates to reduce lock duration
UPDATE orders SET status = 'shipped'
WHERE id IN (SELECT id FROM orders WHERE status = 'pending' LIMIT 1000);
```

4. Session Hygiene
```sql
-- Close idle sessions
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND xact_start < NOW() - INTERVAL '5 minutes';

-- Monitor session count
SELECT COUNT(*) FROM pg_stat_activity;  -- Keep under max_connections
```

5. Memory Management
```rust
// Use connection pooling
use r2d2::Pool;
use r2d2_postgres::{postgres::NoTls, PostgresConnectionManager};

let manager = PostgresConnectionManager::new(
    "postgresql://localhost:5432/heliosdb".parse()?,
    NoTls,
);
let pool = Pool::builder()
    .max_size(20) // Limit concurrent connections
    .build(manager)?;
```

Example Scenarios
Scenario 1: Session-Based Web Application
Use Case: Store user session data with 30-minute TTL
```rust
use heliosdb_nano::EmbeddedDatabase;

// Initialize in-memory database
let db = EmbeddedDatabase::new_in_memory()?;

db.execute("
    CREATE TABLE sessions (
        session_id UUID PRIMARY KEY,
        user_id INT NOT NULL,
        data JSONB NOT NULL,
        created_at TIMESTAMP NOT NULL,
        expires_at TIMESTAMP NOT NULL
    )
")?;

// Create index for expiration queries
db.execute("CREATE INDEX idx_sessions_expires ON sessions(expires_at)")?;

// Insert session
db.execute("
    INSERT INTO sessions VALUES (
        gen_random_uuid(),
        1234,
        '{\"cart\": [\"item1\", \"item2\"]}',
        NOW(),
        NOW() + INTERVAL '30 minutes'
    )
")?;

// Cleanup expired sessions (run every minute)
db.execute("DELETE FROM sessions WHERE expires_at < NOW()")?;

// Schedule daily dump
// cron: 0 2 * * * heliosdb-nano dump --output /backups/sessions.heliodump
```

Scenario 2: Real-Time Analytics Cache
Use Case: Cache aggregated metrics with periodic refresh
```sql
-- Create metrics cache table
CREATE TABLE metrics_cache (
    metric_key TEXT PRIMARY KEY,
    metric_value NUMERIC,
    dimensions JSONB,
    last_updated TIMESTAMP
);

-- Populate from source (runs every 5 minutes)
INSERT INTO metrics_cache
SELECT
    'daily_revenue' AS metric_key,
    SUM(amount) AS metric_value,
    jsonb_build_object('date', CURRENT_DATE) AS dimensions,
    NOW() AS last_updated
FROM orders
WHERE created_at >= CURRENT_DATE
ON CONFLICT (metric_key) DO UPDATE
SET metric_value = EXCLUDED.metric_value,
    last_updated = EXCLUDED.last_updated;

-- Fast reads from cache (< 1ms)
SELECT metric_value FROM metrics_cache WHERE metric_key = 'daily_revenue';
```

Scenario 3: Concurrent Inventory Management
Use Case: Handle concurrent product reservations with SERIALIZABLE isolation
```sql
-- Enable SERIALIZABLE for this session
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN;

-- Check availability
SELECT quantity FROM inventory WHERE product_id = 123;

-- Reserve items (will fail if a concurrent update conflicts)
UPDATE inventory
SET quantity = quantity - 1,
    reserved = reserved + 1
WHERE product_id = 123 AND quantity > 0;

-- If successful, create the reservation
INSERT INTO reservations (product_id, user_id, created_at)
VALUES (123, 456, NOW());

COMMIT;  -- May fail with a serialization error; the client should retry
```

Scenario 4: Development/Testing Environment
Use Case: Fast test execution with clean state
```bash
#!/bin/bash
# Start in-memory database
heliosdb-nano start --memory --port 5555 &
DB_PID=$!

# Wait for startup
sleep 2

# Run tests
pytest tests/

# Cleanup (data automatically discarded)
kill $DB_PID
```

Scenario 5: Batch Data Processing Pipeline
Use Case: Load, transform, and export data
```python
import csv
import psycopg2

# Connect to in-memory instance
conn = psycopg2.connect("postgresql://localhost:5432/heliosdb")
cur = conn.cursor()

# Create staging table (and the sequence used for generated IDs)
cur.execute("CREATE SEQUENCE seq")
cur.execute("""
    CREATE TABLE staging_data (
        id INT,
        raw_json JSONB
    )
""")

# Bulk load data
with open('input.jsonl') as f:
    for line in f:
        cur.execute(
            "INSERT INTO staging_data VALUES (nextval('seq'), %s)",
            (line,),
        )

conn.commit()

# Transform data
cur.execute("""
    CREATE TABLE processed_data AS
    SELECT
        id,
        raw_json->>'field1' AS field1,
        (raw_json->>'field2')::INT AS field2
    FROM staging_data
    WHERE raw_json->>'valid' = 'true'
""")

conn.commit()

# Export results
cur.execute("SELECT * FROM processed_data")
with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(cur)

# Optionally dump final state:
# heliosdb-nano dump --output pipeline_state.heliodump

conn.close()
```

Troubleshooting
Issue: Out of Memory Errors
Symptoms:
```
ERROR: memory limit exceeded for user 'alice'
ERROR: cannot allocate memory for transaction
```

Solutions:
- Increase the per-user quota:

```toml
[resource_quotas]
memory_limit_per_user = 2147483648  # 2GB
```

- Reduce concurrent sessions:

```toml
[session]
max_sessions_per_user = 5
```

- Batch large operations:

```sql
-- Instead of: INSERT INTO ... SELECT * FROM huge_table;
-- Do this:
INSERT INTO target SELECT * FROM huge_table LIMIT 10000;
-- Repeat in batches
```
Issue: Serialization Errors
Symptoms:
```
ERROR: could not serialize access due to concurrent update
```

Solutions:
- Use a lower isolation level if acceptable:

```sql
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```

- Implement retry logic:

```python
import time
import psycopg2

max_retries = 3
for attempt in range(max_retries):
    try:
        with conn.cursor() as cur:
            cur.execute("BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE")
            # ... transaction logic ...
        conn.commit()
        break
    except psycopg2.extensions.TransactionRollbackError:
        conn.rollback()
        if attempt == max_retries - 1:
            raise
        time.sleep(0.1 * (2 ** attempt))  # Exponential backoff
```

- Reduce transaction duration:

```sql
BEGIN;
-- Minimize time between BEGIN and COMMIT
UPDATE inventory SET quantity = quantity - 1 WHERE product_id = 123;
COMMIT;
```
Issue: Lock Timeout Errors
Symptoms:
```
ERROR: lock timeout exceeded
ERROR: could not obtain lock on row
```

Solutions:
- Increase the lock timeout:

```toml
[locks]
timeout_ms = 60000  # 60 seconds
```

- Check for deadlocks:

```sql
SELECT * FROM pg_locks WHERE NOT granted;
```

- Identify blocking sessions:

```sql
SELECT
    blocking.pid AS blocking_pid,
    blocked.pid AS blocked_pid,
    blocking.query AS blocking_query
FROM pg_stat_activity blocking
JOIN pg_locks blocked_locks ON blocking.pid = blocked_locks.pid
JOIN pg_stat_activity blocked ON blocked_locks.pid = blocked.pid
WHERE NOT blocked_locks.granted;
```
Issue: Connection Limit Exceeded
Symptoms:
```
ERROR: connection limit exceeded for user 'alice'
ERROR: too many connections
```

Solutions:
- Close idle connections:

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = 'alice' AND state = 'idle';
```

- Use connection pooling:

```python
from psycopg2 import pool

connection_pool = pool.SimpleConnectionPool(1, 10, dsn="...")
```

- Increase the limit:

```toml
[resource_quotas]
connection_limit_per_user = 20
```
Issue: Data Loss on Shutdown
Symptoms:
- Data disappears after restart
- No warning on shutdown
Solutions:
- Enable shutdown warnings:

```toml
[dump]
warn_on_dirty_shutdown = true
```

- Check dirty state before shutdown:

```bash
heliosdb-nano status
# Dirty state: YES (1234 uncommitted changes)
```

- Automate a pre-shutdown dump:

```ini
# systemd service file
[Service]
ExecStop=/usr/bin/heliosdb-nano dump --output /backups/shutdown.heliodump
ExecStop=/bin/kill -SIGTERM $MAINPID
```
Issue: Performance Degradation
Symptoms:
- Queries slower than expected
- High lock contention
Solutions:
- Monitor active transactions:

```sql
SELECT COUNT(*), state FROM pg_stat_activity GROUP BY state;
```

- Check for long-running transactions:

```sql
SELECT pid, usename, state, NOW() - xact_start AS duration
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY duration DESC;
```

- Analyze lock contention:

```sql
SELECT locktype, COUNT(*) FROM pg_locks GROUP BY locktype;
```

- Enable query profiling:

```sql
EXPLAIN ANALYZE SELECT ...;
```
See Also
- Dump and Restore Guide - Data persistence procedures
- CLI Reference - Command-line interface
- Configuration Reference - Configuration options
- Session Management API - Programmatic API
- Multi-User Transactions Architecture - System design
Version: 3.1.0 Last Updated: 2025-12-08 Maintained by: HeliosDB Team