Scaling

WASP is designed for single-server deployment with a multi-container architecture. This page covers resource requirements, performance tuning, and horizontal scaling considerations.

Resource Requirements

Minimum (Single User)

  • CPU: 2 cores
  • RAM: 4 GB
  • Disk: 20 GB
  • Network: 10 Mbps

Recommended

  • CPU: 4 cores
  • RAM: 8-16 GB
  • Disk: 100 GB SSD
  • Network: 100 Mbps

With Local LLM (Ollama)

  • CPU: 8+ cores
  • RAM: 16-32 GB (model dependent)
  • GPU: Optional but recommended for large models
  • Additional disk: 10-80 GB for model weights

Memory Usage by Component

| Component                 | Typical RAM | Peak RAM                |
| ------------------------- | ----------- | ----------------------- |
| agent-core                | 400 MB      | 1 GB                    |
| agent-redis               | 100 MB      | 256 MB (max configured) |
| agent-postgres            | 200 MB      | 500 MB                  |
| agent-telegram            | 100 MB      | 200 MB                  |
| agent-nginx               | 20 MB       | 50 MB                   |
| agent-broker              | 50 MB       | 100 MB                  |
| Chromium (browser skill)  | 500 MB      | 2 GB                    |

The Chromium browser is the most memory-intensive component. The shm_size: 2gb setting in docker-compose.yml ensures it has sufficient shared memory; without it, Chromium falls back to the Docker default of 64 MB and can crash on heavy pages.
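For reference, the shared-memory setting sits on the service that runs the browser; the service name shown here is illustrative, only the shm_size line is the relevant part:

```
services:
  agent-core:
    shm_size: 2gb
```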

Disk Usage

| Path                         | Typical Size   | Growth Rate      |
| ---------------------------- | -------------- | ---------------- |
| /home/agent/data/postgres    | 500 MB - 5 GB  | ~1 MB/day        |
| /home/agent/data/redis       | 50-200 MB      | Stable (LRU)     |
| /home/agent/data/memory      | 10-100 MB      | ~5 MB/week       |
| /home/agent/data/logs        | 100 MB - 1 GB  | ~10 MB/day       |
| /home/agent/data/screenshots | Variable       | Cleared manually |
| Ollama models                | 4-70 GB        | Per model pulled |
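Since the screenshots directory is only cleared manually, a small helper along these lines can prune stale files. The function name, target path, and 7-day retention are assumptions for illustration, not part of WASP:

```shell
# Sketch: delete files older than a given number of days.
# -mtime +7 matches files whose age exceeds 7 full days.
prune_old_files() {
  local dir="$1" days="${2:-7}"
  find "$dir" -type f -mtime +"$days" -delete
}

# Example (path is an assumption):
# prune_old_files /home/agent/data/screenshots 7
```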

Performance Tuning

Increase Goal Concurrency

For servers with more CPU:

GOAL_MAX_CONCURRENT=5
AGENTS_MAX_CONCURRENT_STEPS=10

Adjust Tick Intervals

Faster ticks make goals more responsive at the cost of higher CPU usage:

GOAL_TICK_INTERVAL=10     # Default: 15s
AGENTS_TICK_INTERVAL=10   # Default: 15s

Redis Memory

If memories grow large, increase Redis max memory:

# docker-compose.yml
command: redis-server --maxmemory 512mb --maxmemory-policy allkeys-lru

PostgreSQL Tuning

For high audit log volume (250k+ entries):

# Add index on audit_log
docker exec agent-postgres psql -U agent -d agent -c "
CREATE INDEX IF NOT EXISTS idx_audit_log_created_at ON audit_log(created_at DESC);
CREATE INDEX IF NOT EXISTS idx_audit_log_skill_name ON audit_log(skill_name);
"

Clean old audit logs:

docker exec agent-postgres psql -U agent -d agent -c "
DELETE FROM audit_log WHERE created_at < NOW() - INTERVAL '90 days';
"
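To keep the table bounded without manual intervention, a host crontab entry along these lines could run the same DELETE on a schedule; the weekly 03:00 slot is an assumption, not a WASP default:

```
# Sundays at 03:00: prune audit_log rows older than 90 days
0 3 * * 0 docker exec agent-postgres psql -U agent -d agent -c "DELETE FROM audit_log WHERE created_at < NOW() - INTERVAL '90 days';"
```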

Horizontal Scaling Limitations

WASP is architected for single-node deployment:

Cannot scale:

  • agent-core — single event consumer, singleton scheduler
  • agent-redis — session state, stream consumer groups
  • agent-postgres — write-intensive, single primary

Can scale:

  • agent-telegram — but requires partitioned user routing
  • agent-nginx — stateless, can run multiple replicas behind LB

Multi-User Scaling

WASP supports multiple users via TELEGRAM_ALLOWED_USERS:

TELEGRAM_ALLOWED_USERS=user1_id,user2_id,user3_id

Each user gets:

  • Isolated chat history
  • Separate Knowledge Graph nodes (chat_id scoped)
  • Shared skill registry and model providers

For high user volumes (>10 concurrent active users), increase:

GOAL_MAX_CONCURRENT=10
AGENTS_MAX_ACTIVE=20
AGENTS_GLOBAL_TOKEN_BUDGET_PER_MINUTE=500000

Monitoring Under Load

Watch these metrics when under heavy load:

# CPU usage
docker stats agent-core --no-stream

# Redis memory
docker exec agent-redis redis-cli INFO memory | grep used_memory_human

# Postgres connections
docker exec agent-postgres psql -U agent -d agent -c "SELECT count(*) FROM pg_stat_activity;"

# Stream backlog (should be < 1000)
docker exec agent-redis redis-cli XLEN events:incoming

# Goal queue depth
docker exec agent-redis redis-cli HLEN goals
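The backlog check above can be wrapped in a small threshold test for use in a watchdog script; the helper below is a sketch (the function name and 1000-entry default mirror the guidance above but are not part of WASP):

```shell
# Return success when the given stream length is below the limit.
backlog_ok() {
  local len="$1" limit="${2:-1000}"
  [ "$len" -lt "$limit" ]
}

# Example usage against the live stream:
# len=$(docker exec agent-redis redis-cli XLEN events:incoming)
# backlog_ok "$len" || echo "WARNING: events:incoming backlog is $len"
```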

Backup and Recovery

Daily Backup

# Create backup
cd /home/agent && bash scripts/backup.sh

# Restore from backup
bash scripts/restore.sh /home/agent/data/backups/backup_2026_03_09.tar.gz
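To make the daily backup actually daily, the script can be scheduled from host cron; the 02:00 time and log destination here are assumptions:

```
# Nightly backup at 02:00
0 2 * * * cd /home/agent && bash scripts/backup.sh >> /home/agent/data/logs/backup.log 2>&1
```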

Manual Database Backup

docker exec agent-postgres pg_dump -U agent agent > /home/agent/data/backups/pg_backup_$(date +%Y%m%d).sql

Upgrading

# Pull new code
cd /home/agent && git pull

# Rebuild and restart
docker compose build agent-core
docker compose up -d agent-core

# Verify
docker compose logs agent-core --tail=20

Self-modifications applied via the self_improve skill are persisted in /data/src_patches/ and automatically re-applied after rebuild.