# Dream Mode and Reflection
WASP's reflection systems run during downtime to consolidate memories, update the self-model, and generate insights about recent activity.
## Dream Mode (`src/scheduler/dream.py`)

Dream Mode is WASP's deep consolidation cycle. It activates automatically when conditions are met:

### Activation Conditions
```python
# DreamJob checks every hour:
inactive_hours = (now - last_user_message).total_seconds() / 3600
is_night = local_hour in range(1, 7)  # 1 AM to 7 AM
should_dream = (
    inactive_hours >= 2.0 and
    (is_night or inactive_hours >= 4.0) and
    not dreamed_in_last_6h
)
```
| Condition | Value |
|---|---|
| Minimum inactivity | 2 hours |
| Time window | 1 AM - 7 AM local, OR 4+ hours inactive |
| Cooldown | 6 hours minimum between dreams |
### Dream Sequence

When Dream Mode activates:
- **Memory promotion** — `PromotionEngine` promotes frequently-accessed episodic memories to semantic (long-term) memory. Memories with high `use_count` become permanent facts.
- **Knowledge Graph extraction** — Recent conversations are batch-processed for entity/relationship extraction that may have been missed during real-time processing.
- **LLM reflection** — The agent generates a reflection on recent conversations:

  Prompt: "Reflect on the following memories from the last 24 hours. What patterns do you notice? What can you learn? Recent memories: [...]"

  The reflection is stored in `dream_log`.
- **Price prefetch** — Current prices for all crypto assets in the Knowledge Graph are fetched and stored in `world_timeline` as `source=dream_prefetch`.
- **State update** — `agent:dream_state` is updated in Redis with timestamp and summary.
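The promotion step can be illustrated as a simple partition over episodic memories. This is a hedged sketch: the `use_count` threshold and record shape are assumptions for illustration, not `PromotionEngine`'s actual implementation.

```python
PROMOTION_THRESHOLD = 5  # assumed cutoff for "frequently accessed"

def promote_memories(episodic: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split episodic memories into promoted (-> semantic) and retained."""
    promoted, retained = [], []
    for memory in episodic:
        if memory.get("use_count", 0) >= PROMOTION_THRESHOLD:
            # Promoted memories become permanent semantic facts.
            promoted.append({**memory, "tier": "semantic"})
        else:
            retained.append(memory)
    return promoted, retained
```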
### Dream Log

Each dream is recorded in the `dream_log` table:
```shell
docker exec agent-postgres psql -U agent -d agent -c "
SELECT id, started_at, ended_at, memories_processed, reflection_summary
FROM dream_log
ORDER BY started_at DESC
LIMIT 5;
"
```
### Dream State

```shell
# Check last dream state
docker exec agent-redis redis-cli GET agent:dream_state | python3 -m json.tool
```
## Memory Reflection (`scheduler/jobs.py` — ReflectionJob)

The `ReflectionJob` runs every 6 hours and is lighter than Dream Mode:
- Retrieves recent episodic memories
- Asks the LLM to synthesize key insights
- Stores insights as semantic memories
- Updates the self-model with any new observations
## Self-Model Updates (`src/agent/self_model.py`)

The self-model is updated after every message:
```python
async def record_message_processed(
    skill_calls: list[str],
    success: bool,
    domain: str,
    chat_id: str,
):
    model = await load_self_model()

    # Update weekly stats
    model.weekly_stats["messages"] += 1

    # Update skill success rates
    for skill in skill_calls:
        rates = model.skill_success_rates.setdefault(skill, {"success": 0, "total": 0})
        rates["total"] += 1
        if success:
            rates["success"] += 1

    await save_self_model(model)
```
### Self-Model Structure

```json
{
  "strengths": ["web_search", "python_exec", "data_analysis"],
  "known_failures": ["complex_math", "vision_fine_details"],
  "user_preferences": ["concise responses", "no markdown in chat", "Spanish"],
  "weekly_stats": {
    "messages": 247,
    "skills_called": 1893,
    "goals_completed": 12
  },
  "skill_success_rates": {
    "web_search": {"success": 423, "total": 445},
    "python_exec": {"success": 189, "total": 213},
    "browser": {"success": 98, "total": 135}
  }
}
```
The self-model is backed up to `/data/memory/self_model.json` and survives Redis flushes.
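The dual-persistence idea can be sketched as writing the serialized model to both Redis and disk. This is a minimal illustration: the function signature, the `agent:self_model` Redis key, and the optional `redis_client` parameter are assumptions, not the actual `save_self_model` API.

```python
import json
import tempfile
from pathlib import Path

def save_self_model(model: dict, redis_client=None,
                    backup_path: str = "/data/memory/self_model.json") -> None:
    """Persist the model to Redis (fast path) and a disk backup (durable)."""
    payload = json.dumps(model)
    if redis_client is not None:
        # Key name is assumed for illustration; lost on FLUSHALL.
        redis_client.set("agent:self_model", payload)
    # Disk copy survives Redis flushes.
    Path(backup_path).write_text(payload)

# Round-trip through a temporary file to demonstrate the backup path.
tmp = Path(tempfile.mkdtemp()) / "self_model.json"
save_self_model({"weekly_stats": {"messages": 1}}, backup_path=str(tmp))
restored = json.loads(tmp.read_text())
```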
## Epistemic State Updates (`src/agent/epistemic.py`)

After every message, domain confidence is updated:
```python
# Success → confidence += 0.015 * (1 - current)  (symmetric base rate)
# Failure → confidence -= 0.015 * current
# Deltas shrink near the 0/1 bounds, which prevents overconfidence
async def record_outcome(domain: str, success: bool):
    state = await load_epistemic()
    current = state.confidence.get(domain, 0.5)
    if success:
        delta = 0.015 * (1 - current)  # Diminishing returns near 1.0
    else:
        delta = -0.015 * current  # Diminishing penalty near 0.0
    state.confidence[domain] = max(0.05, min(0.98, current + delta))
    await save_epistemic(state)
```
The symmetric ±0.015 base rate prevents asymmetric drift (a previous bug used +0.02/-0.04, which systematically eroded confidence over time).
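A worked example of the bounded update rule above: repeated successes show diminishing returns as confidence approaches the 0.98 ceiling, and repeated failures settle on the 0.05 floor rather than collapsing to zero. The standalone function here strips out the async state loading for illustration.

```python
def record_outcome(confidence: float, success: bool) -> float:
    """Pure version of the confidence update rule above."""
    if success:
        delta = 0.015 * (1 - confidence)  # smaller gains near 1.0
    else:
        delta = -0.015 * confidence  # smaller penalties near 0.0
    return max(0.05, min(0.98, confidence + delta))

# 200 consecutive successes from the 0.5 prior: approaches but never
# pins at the 0.98 ceiling, because gains shrink as confidence rises.
c = 0.5
for _ in range(200):
    c = record_outcome(c, success=True)

# 200 consecutive failures from the 0.5 prior: clamped at the 0.05 floor.
c2 = 0.5
for _ in range(200):
    c2 = record_outcome(c2, success=False)
```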
## Procedure Abstraction (`src/memory/procedural.py`)

After complex multi-skill conversations, `abstract_procedure()` runs:
```python
if len(skill_calls) >= 3 and len(unique_skills) >= 2:
    # Worth abstracting
    procedure = await llm_abstract_procedure(
        task=original_objective,
        steps=skill_calls,
        context=conversation_summary,
    )
    await store_procedure(procedure, session)
```
Procedures are stored with trigger keywords. Future tasks with similar descriptions retrieve and inject them as hints.
## Manual Reflection Trigger

```shell
# Trigger reflection immediately
curl -b cookies.txt -X POST https://agentwasp.com/api/scheduler/run/reflection

# Or dream mode
curl -b cookies.txt -X POST https://agentwasp.com/api/scheduler/run/dream
```
## Introspection Command

```
/introspect
```

Returns a comprehensive report:
- Health score (0-100)
- Self-model summary
- Epistemic state
- Top skills by usage
- Recent goal history
- Integrity report findings