Commit 93f44e2
Parent(s): f89bd21
cache key error when user id changes -fixed task 1 31_10_2025 v6

Files changed:
- CACHE_INVALIDATION_FIX_IMPLEMENTED.md +201 -0
- CONTEXT_CACHE_USAGE_REPORT.md +218 -0
- INTERACTION_CONTEXT_FAILURE_ANALYSIS.md +262 -0
- LLM_BASED_TOPIC_EXTRACTION_IMPLEMENTATION.md +268 -0
- PATTERN_BASED_TOPIC_ANALYSIS_REVIEW.md +374 -0
- SESSION_CONTEXT_EVERY_TURN_IMPLEMENTATION.md +236 -0
- src/agents/intent_agent.py +8 -3
- src/agents/safety_agent.py +8 -3
- src/agents/skills_identification_agent.py +6 -2
- src/agents/synthesis_agent.py +8 -1
- src/context_manager.py +159 -25
- src/orchestrator_engine.py +203 -100
CACHE_INVALIDATION_FIX_IMPLEMENTED.md
ADDED
@@ -0,0 +1,201 @@
# Cache Invalidation Fix - Implementation Summary

## Overview

Implemented targeted fixes for interaction context retrieval failures by adding proper cache invalidation. All changes **only affect cache management** and do not modify application functionality.

## Issues Fixed

### Issue 1: Primary - Cache Not Invalidated After Interaction Context Generation ✅

**Location**: `src/orchestrator_engine.py` (lines 449-457)

**Fix**: After successfully generating and storing an interaction context, invalidate both caches to force fresh retrieval on the next request.

**Changes**:
- Call `invalidate_session_cache()` after `generate_interaction_context()`
- Also clear orchestrator-level `_context_cache`
- Added logging for cache invalidation

**Impact**: The next request will query the database instead of using the stale cache, retrieving newly generated interaction contexts.

### Issue 2: Secondary - Orchestrator Cache Not Cleared ✅

**Location**: `src/orchestrator_engine.py` (lines 453-457)

**Fix**: When invalidating the session cache, also clear the orchestrator-level cache (`_context_cache`).

**Changes**:
- Check if `_context_cache` exists and contains an entry for the session
- Delete the orchestrator cache entry if present
- Added debug logging

**Impact**: Prevents the orchestrator cache from returning stale data even if the session cache is cleared.

### Issue 3: Tertiary - Context Reference Mismatch Detection ✅

**Location**: `src/orchestrator_engine.py` (lines 174-195)

**Fix**: Detect when users reference previous context but the cache shows 0 contexts, then force a cache refresh.

**Changes**:
- Detect phrases like "based on above inputs", "previous response", etc.
- If the user references previous context but has 0 interaction contexts, invalidate both caches
- Force re-retrieval of context from the database
- Log a warning when a mismatch is detected and info when refreshed

**Impact**: When users explicitly reference previous context, the system will refresh the cache and retrieve stored interaction contexts.

### Issue 4: Cache Invalidation Method Added ✅

**Location**: `src/context_manager.py` (lines 1045-1053)

**Fix**: Added a dedicated method for cache invalidation.

**Changes**:
- Added `invalidate_session_cache(session_id)` method
- Method safely checks if the cache entry exists before deletion
- Added logging for cache invalidation

**Impact**: Provides a clean API for cache invalidation that can be reused throughout the codebase.

## Code Changes Summary

### File: `src/context_manager.py`

**Added Method** (lines 1045-1053):
```python
def invalidate_session_cache(self, session_id: str):
    """
    Invalidate cached context for a session to force fresh retrieval
    Only affects cache management - does not change application functionality
    """
    session_cache_key = f"session_{session_id}"
    if session_cache_key in self.session_cache:
        del self.session_cache[session_cache_key]
        logger.info(f"Cache invalidated for session {session_id} to ensure fresh context retrieval")
```

### File: `src/orchestrator_engine.py`

**Change 1** - Cache invalidation after interaction context generation (lines 449-457):
```python
# After generate_interaction_context()
# Invalidate caches to ensure fresh context retrieval on next request
# Only affects cache management - does not change application functionality
self.context_manager.invalidate_session_cache(session_id)
# Also clear orchestrator-level cache
if hasattr(self, '_context_cache'):
    orchestrator_cache_key = f"context_{session_id}"
    if orchestrator_cache_key in self._context_cache:
        del self._context_cache[orchestrator_cache_key]
        logger.debug(f"Orchestrator cache invalidated for session {session_id}")
```

**Change 2** - Context reference mismatch detection (lines 174-195):
```python
# Detect context reference mismatches and force cache refresh if needed
# Only affects cache management - does not change application functionality
user_input_lower = user_input.lower()
references_previous = any(phrase in user_input_lower for phrase in [
    'based on above', 'based on previous', 'above inputs', 'previous response',
    'last response', 'earlier', 'before', 'mentioned above', 'as discussed',
    'from above', 'from previous', 'the above', 'the previous'
])

interaction_contexts_count = len(context.get('interaction_contexts', []))
if references_previous and interaction_contexts_count == 0:
    logger.warning("User references previous context but cache shows 0 contexts - forcing cache refresh")
    # Invalidate both caches and re-retrieve
    self.context_manager.invalidate_session_cache(session_id)
    if hasattr(self, '_context_cache'):
        orchestrator_cache_key = f"context_{session_id}"
        if orchestrator_cache_key in self._context_cache:
            del self._context_cache[orchestrator_cache_key]
    # Force fresh context retrieval
    context = await self._get_or_create_context(session_id, user_input, user_id)
    interaction_contexts_count = len(context.get('interaction_contexts', []))
    logger.info(f"Context refreshed after cache invalidation: {interaction_contexts_count} interaction contexts")
```

## Impact Assessment

### Application Functionality
- **NO CHANGES**: All existing functionality preserved
- Only cache management logic modified
- Database operations unchanged
- Agent execution unchanged
- Response generation unchanged

### Cache Behavior
- **IMPROVED**: Cache now invalidated at appropriate times
- Fresh data retrieved when needed
- Stale cache no longer prevents context retrieval

### Performance
- **MINIMAL IMPACT**: Cache invalidation is an O(1) operation
- May result in one extra database query per request (only when the cache is invalidated)
- Trade-off: better accuracy vs. minimal performance cost

## Expected Behavior After Fix

### Scenario 1: Normal Request Flow
```
Request 1: "Tell me about Excel handling"
  → Generate response
  → Store interaction context in DB
  → Invalidate cache ✅
Request 2: "Create a prototype"
  → Cache miss (invalidated)
  → Query database
  → Retrieve interaction context from DB ✅
  → Use context for response generation
```

### Scenario 2: Context Reference Detection
```
Request 1: "Tell me about Excel handling"
  → Generate response
  → Store interaction context in DB
  → Invalidate cache ✅
Request 2: "Based on above inputs, create prototype"
  → Cache miss (invalidated) OR cache hit with 0 contexts
  → Detect "based on above" reference ✅
  → Force cache invalidation ✅
  → Query database
  → Retrieve interaction context from DB ✅
  → Use context for response generation
```

## Testing Recommendations

1. **Test Normal Flow** (see the sketch after this list):
   - Send request 1 → verify cache invalidated in logs
   - Send request 2 → verify interaction context retrieved (> 0)
   - Verify the response correctly references the previous discussion

2. **Test Context Reference Detection**:
   - Send request 1
   - Send request 2 with "based on above inputs"
   - Verify log: "forcing cache refresh"
   - Verify log: "Context refreshed after cache invalidation: X interaction contexts"
   - Verify the response correctly references the previous discussion

3. **Verify No Functionality Regression**:
   - All existing features work as before
   - Response quality unchanged
   - Performance acceptable (may have one extra DB query)
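The normal-flow check can be automated roughly as below. This is a minimal pytest-style sketch only, assuming a pytest-asyncio setup, an `orchestrator` fixture, and a `process_request(user_input, session_id, user_id)` signature; those names are assumptions based on this document, not the project's actual test harness. Only the cache key formats (`session_{session_id}`, `context_{session_id}`) come from the fix itself.

```python
import pytest

@pytest.mark.asyncio
async def test_cache_invalidated_after_first_request(orchestrator):
    """Request 1 should store an interaction context and clear both caches,
    so the next request is forced back to the database for fresh context."""
    session_id, user_id = "test-session", "test-user"

    # Hypothetical call shape; adjust to the real process_request() signature.
    await orchestrator.process_request("Tell me about Excel handling", session_id, user_id)

    # The fix invalidates the session-level cache entry after generating the context
    assert f"session_{session_id}" not in orchestrator.context_manager.session_cache

    # The orchestrator-level cache entry should be gone as well
    assert f"context_{session_id}" not in getattr(orchestrator, "_context_cache", {})
```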
## Logging Changes

New log entries to monitor:
- `"Cache invalidated for session {session_id} to ensure fresh context retrieval"`
- `"Orchestrator cache invalidated for session {session_id}"` (debug level)
- `"User references previous context but cache shows 0 contexts - forcing cache refresh"` (warning)
- `"Context refreshed after cache invalidation: X interaction contexts"`

## Related Documentation

- See `INTERACTION_CONTEXT_FAILURE_ANALYSIS.md` for detailed root cause analysis
- Changes only affect cache management as identified in the analysis
CONTEXT_CACHE_USAGE_REPORT.md
ADDED
@@ -0,0 +1,218 @@
# Context Cache Usage Report

## Executive Summary

This report analyzes how agents and components access interaction, session, and user contexts. The analysis confirms that agents use cache-only context, but identifies one area where session context generation queries the database directly (which is acceptable since it only runs at session end).

## Context Access Flow

### 1. Context Retrieval Pattern

**Orchestrator → Context Manager → Cache → Agents**

```
orchestrator.process_request()
  └─> _get_or_create_context()
      └─> context_manager.manage_context()
          ├─> Check session_cache (cache-first)
          ├─> Check user_cache (cache-first)
          ├─> Only queries DB if cache miss
          └─> Returns cached context to orchestrator
              └─> Passes to agents
```

### 2. Cache-Only Access Verification

All agents receive context from the orchestrator, which gets it from cache:

#### Intent Agent (`src/agents/intent_agent.py`)
- **Context Source**: From orchestrator (lines 201-204)
- **Context Access**:
  - Uses `combined_context` (pre-formatted from cache)
  - Falls back to `interaction_contexts` and `user_context` from cache
- **Status**: ✅ Cache-only

#### Skills Identification Agent (`src/agents/skills_identification_agent.py`)
- **Context Source**: From orchestrator (lines 218-221)
- **Context Access**:
  - Uses `user_context` from cache (line 230)
  - Uses `interaction_contexts` from cache (line 231)
- **Status**: ✅ Cache-only

#### Synthesis Agent (`src/agents/synthesis_agent.py`)
- **Context Source**: From orchestrator (lines 283-287)
- **Context Access**:
  - Uses `interaction_contexts` from cache (lines 299, 358, 550)
  - Uses `user_context` from cache (implied in context dict)
- **Status**: ✅ Cache-only

#### Safety Agent (`src/agents/safety_agent.py`)
- **Context Source**: From orchestrator (lines 314-317)
- **Context Access**:
  - Uses `user_context` from cache (line 158)
  - Uses `interaction_contexts` from cache (line 163)
- **Status**: ✅ Cache-only

## Context Manager Cache Behavior

### Session Context (Interaction Contexts)

**Location**: `src/context_manager.py` - `manage_context()` (lines 235-289)

**Cache Strategy** (a minimal sketch of this pattern follows this list):
1. **Check cache first** (line 247): `session_context = self._get_from_memory_cache(session_cache_key)`
2. **Query database only if cache miss** (lines 260-262): Only when `not session_context`
3. **Cache immediately after DB query** (line 265): Warm cache with fresh data
4. **Update cache synchronously** (line 444): When an interaction context is generated, the cache is updated immediately via `_update_cache_with_interaction_context()`

**Result**: ✅ **Cache-first, DB only on miss**
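As a quick illustration of steps 1-3 above, here is a minimal, self-contained sketch of a cache-first lookup. The dict-backed cache and the `_retrieve_from_db` stub are stand-ins for illustration only, not the actual `context_manager` implementation.

```python
from typing import Any, Dict

# Hypothetical in-memory cache and DB stub used only for this sketch.
_session_cache: Dict[str, Dict[str, Any]] = {}

def _retrieve_from_db(session_id: str) -> Dict[str, Any]:
    # Stand-in for the real database query described in this report.
    return {"session_id": session_id, "interaction_contexts": []}

def get_session_context(session_id: str) -> Dict[str, Any]:
    """Cache-first lookup: check the cache, hit the DB only on a miss."""
    key = f"session_{session_id}"
    context = _session_cache.get(key)             # 1. check cache first
    if not context:
        context = _retrieve_from_db(session_id)   # 2. DB only on cache miss
        _session_cache[key] = context             # 3. warm the cache immediately
    return context
```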
### User Context

**Location**: `src/context_manager.py` - `manage_context()` (lines 267-281)

**Cache Strategy**:
1. **Check cache first** (line 258): `user_context = self._get_from_memory_cache(user_cache_key)`
2. **Query database only if not cached** (line 269): `if not user_context or not user_context.get("user_context_loaded")`
3. **Cache after first load** (line 277): `self._warm_memory_cache(user_cache_key, user_context)`
4. **Never queries DB again** (lines 279-281): Uses the cached user context for all subsequent requests

**Result**: ✅ **Load once, cache forever (no DB queries after initial load)**

### Interaction Contexts

**Location**: `src/context_manager.py` - `_update_cache_with_interaction_context()` (lines 770-809)

**Cache Strategy**:
1. **Updated synchronously with database** (line 444): Called immediately after the DB insert
2. **Adds to cached list** (line 788): Updates the in-memory cache without a DB query
3. **Maintains most recent 20** (line 789): Matches the database query limit (sketched below)

**Result**: ✅ **Cache updated when DB updated, no DB query needed**
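The "add to the cached list and keep the most recent 20" behaviour can be illustrated with a small sketch. The function and parameter names here are hypothetical; only the limit of 20 (matching the retrieval query's `LIMIT 20`) comes from this report.

```python
from typing import Any, Dict

MAX_CACHED_CONTEXTS = 20  # matches the LIMIT 20 used by the database query

def update_cache_with_interaction_context(session_cache: Dict[str, Dict[str, Any]],
                                           session_id: str,
                                           interaction_summary: str) -> None:
    """Append a new interaction summary to the cached list and trim it to the
    most recent MAX_CACHED_CONTEXTS entries, without touching the database."""
    key = f"session_{session_id}"
    entry = session_cache.setdefault(key, {"interaction_contexts": []})
    entry["interaction_contexts"].insert(0, interaction_summary)   # newest first
    del entry["interaction_contexts"][MAX_CACHED_CONTEXTS:]        # keep the most recent 20
```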
## Session Context Analysis

### Session Context Generation

**Location**: `src/context_manager.py` - `generate_session_context()` (lines 463-532)

**Purpose**: Generate a 100-token summary of the entire session for long-term user persona building

**When Called**: Only at session end via `end_session()` (line 540)

**Database Access**:
- ✅ **Queries database directly** (lines 469-479) to get all interaction contexts for the session
- **Why This Is Acceptable**:
  1. Only called once per session (at end)
  2. Not used by agents during conversation
  3. Used for generating the user persona summary (long-term context)
  4. Cache doesn't need to maintain a full session summary during conversation

**Result**: ✅ **Acceptable - Only runs at session end, not during conversation**

### Session Context Usage

**Finding**: Session context is **NOT used by agents during conversation**.

**Evidence**:
- No agent accesses `session_context` or `session_summary` fields
- Session context is only used for generating the user persona (`get_user_context()`)
- User persona is generated from session contexts across all sessions (line 326)

**Current Usage**:
1. Generated at session end (`generate_session_context()`)
2. Stored in the `session_contexts` table
3. Retrieved when generating the user persona (`get_user_context()`, line 326)
4. User persona is cached after first load (lines 277-281)

**Recommendation**: ✅ **Current usage is correct - session context should not be accessed during conversation**

## Cache Update Synchronization

### Interaction Context Updates

**Location**: `src/context_manager.py` - `generate_interaction_context()` (lines 422-447)

**Flow**:
1. Generate the interaction summary via LLM
2. Store it in the database (lines 427-438)
3. **Immediately update the cache** (line 444): `_update_cache_with_interaction_context()`
4. The cache contains the new interaction context before the next request

**Result**: ✅ **Cache and database synchronized at write time**

## Verification Summary

### ✅ All Agents Use Cache-Only Context

| Agent | Context Source | Access Method | Cache-Only? |
|-------|---------------|---------------|-------------|
| Intent Agent | Orchestrator | `combined_context`, `interaction_contexts`, `user_context` | ✅ Yes |
| Skills Agent | Orchestrator | `user_context`, `interaction_contexts` | ✅ Yes |
| Synthesis Agent | Orchestrator | `interaction_contexts`, `user_context` | ✅ Yes |
| Safety Agent | Orchestrator | `user_context`, `interaction_contexts` | ✅ Yes |

### ✅ Context Manager Cache Strategy

| Context Type | Cache Strategy | DB Queries |
|--------------|----------------|------------|
| Interaction Contexts | Cache-first, updated on write | Only on cache miss |
| User Context | Load once, cache forever | Only on first load |
| Session Context | Generated at session end | Only at session end (acceptable) |

### ✅ Cache Synchronization

| Operation | Cache Update | Timing |
|-----------|--------------|--------|
| Interaction Context Generated | ✅ Immediate | Same time as DB write |
| Session Context Generated | ✅ Not needed | Only at session end |
| User Context Loaded | ✅ Immediate | After first DB query |

## Findings

### ✅ Correct Behaviors

1. **Agents receive context from cache**: All agents get context from the orchestrator, which retrieves it from cache
2. **Cache-first retrieval**: The context manager checks the cache before querying the database
3. **User context cached forever**: Loaded once, never queries the database again
4. **Interaction contexts updated synchronously**: The cache is updated when the database is updated
5. **Session context properly scoped**: Only generated at session end, not used during conversation

### ⚠️ Session Context Notes

1. **Session context generation queries the database**: `generate_session_context()` directly queries the `interaction_contexts` table (lines 473-479)
   - **Status**: ✅ Acceptable - Only called at session end
   - **Impact**: None - Not used during conversation flow
   - **Recommendation**: No change needed

2. **Session context not in cache during conversation**: Session summaries are not cached during conversation
   - **Status**: ✅ Correct behavior
   - **Reason**: Session summaries are 100-token summaries of the entire session, not needed during conversation
   - **Usage**: Only used for generating the user persona (long-term context)

## Recommendations

### ✅ No Changes Needed

All agents and components correctly use cache-only context during conversation. The only database queries during conversation flow are:
- The initial cache miss (acceptable - only happens once per session or on cache expiration)
- The first load of user context (acceptable - only happens once per user)

Session context generation queries the database, but this is acceptable because it:
1. Only runs at session end
2. Is not used by agents during conversation
3. Is used for long-term user persona building

## Conclusion

**Status**: ✅ **All agents and components correctly use cache-only context**

The system follows a cache-first strategy where:
- Context is retrieved from cache
- The database is only queried on a cache miss (first request or cache expiration)
- The cache is updated immediately when the database is updated
- User context is loaded once and cached forever
- Session context generation is properly scoped (session end only)

No changes required - the system is working as designed.
INTERACTION_CONTEXT_FAILURE_ANALYSIS.md
ADDED
@@ -0,0 +1,262 @@
# Interaction Context Retrieval Failure - Root Cause Analysis

## Executive Summary

Interaction contexts are being **stored correctly** in the database, but are **not being retrieved** on subsequent requests due to a cache invalidation failure. The system returns stale cached context that doesn't include newly generated interaction contexts.

## Problem Statement

When a user submits a request referencing previous context (e.g., "based on above inputs"), the system reports `Context retrieved: 0 interaction contexts`, causing:
- Loss of conversation continuity
- Responses generated for the wrong topics
- Previous interaction context unavailable to agents

## Root Cause Analysis

### The Caching Flow

The system uses a two-tier caching mechanism (a minimal sketch of the orchestrator tier follows this list):

1. **Context Manager Cache** (`src/context_manager.py`):
   - Key: `session_{session_id}`
   - Storage: `self.session_cache` dictionary
   - Purpose: Cache session context to avoid database queries

2. **Orchestrator Cache** (`src/orchestrator_engine.py`):
   - Key: `context_{session_id}`
   - Storage: `self._context_cache` dictionary
   - TTL: 5 seconds
   - Purpose: Prevent rapid repeated context retrieval within the same request processing
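For reference, a 5-second TTL cache of this kind typically looks like the sketch below. This is a generic illustration assuming timestamped entries and the key format named above; it is not the exact `_get_or_create_context()` code.

```python
import time
from typing import Any, Dict, Optional, Tuple

CACHE_TTL_SECONDS = 5  # the 5-second TTL described above

# Hypothetical orchestrator-level cache: key -> (timestamp, context)
_context_cache: Dict[str, Tuple[float, Dict[str, Any]]] = {}

def get_cached_context(session_id: str) -> Optional[Dict[str, Any]]:
    """Return the cached context if it is younger than the TTL, else None."""
    key = f"context_{session_id}"
    entry = _context_cache.get(key)
    if entry is None:
        return None
    cached_at, context = entry
    if time.time() - cached_at > CACHE_TTL_SECONDS:
        del _context_cache[key]   # expired: drop it and force a fresh lookup
        return None
    return context

def store_cached_context(session_id: str, context: Dict[str, Any]) -> None:
    _context_cache[f"context_{session_id}"] = (time.time(), context)
```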
### The Failure Sequence

#### First Request (Working - Context Storage):
```
1. User: "Tell me about Excel handling"
2. orchestrator.process_request() called
3. _get_or_create_context() checks orchestrator cache → MISS (empty)
4. Calls context_manager.manage_context()
5. manage_context() checks session_cache → MISS (empty)
6. Calls _retrieve_from_db()
7. Database query: SELECT interaction_summary FROM interaction_contexts WHERE session_id = ?
   → Returns 0 rows (new session)
8. Returns context: { interaction_contexts: [] }
9. Caches in session_cache: session_cache["session_cca279a4"] = { interaction_contexts: [] }
10. Response generated about Excel handling
11. generate_interaction_context() called
12. LLM generates 50-token summary
13. Database INSERT: INSERT INTO interaction_contexts (interaction_id, session_id, ...)
    → ✅ SUCCESS: Interaction context stored in database
14. **CRITICAL MISSING STEP**: Cache NOT invalidated
```

#### Second Request (Broken - Context Retrieval):
```
1. User: "Based on above inputs, create a prototype"
2. orchestrator.process_request() called
3. _get_or_create_context() checks orchestrator cache:
   - If < 5 seconds old → Returns cached context (from step 1)
   - OR continues to step 4
4. Calls context_manager.manage_context()
5. manage_context() checks session_cache:
   session_cache.get("session_cca279a4")
   → ✅ CACHE HIT: Returns cached context from first request
   → Contains: { interaction_contexts: [] }
6. **NEVER queries database** because of the cache hit
7. Context returned with 0 interaction contexts
8. Logs show: "Context retrieved: 0 interaction contexts"
9. Intent agent receives empty context
10. Skills agent analyzes wrong topic
11. Response generated for wrong context (story generation, not Excel)
```

### Root Cause Identified

**PRIMARY ISSUE**: Cache Invalidation Failure

After `generate_interaction_context()` successfully stores an interaction context in the database, **the cache is never invalidated**. This causes:

1. **First Request**: Context cached with `interaction_contexts = []`
2. **Interaction Context Generated**: Stored in database ✅
3. **Cache Not Cleared**: `session_cache["session_{session_id}"]` still contains the old context
4. **Second Request**: Cache hit returns stale context with 0 interaction contexts
5. **Database Never Queried**: The cache check happens before the database query

**Location of Issue**:
- File: `src/orchestrator_engine.py`
- Method: `process_request()`
- Lines: 442-450 (after the `generate_interaction_context()` call)
- **Missing**: Cache invalidation after interaction context generation

### Secondary Issues

#### Issue 2: Orchestrator-Level Cache Also Not Cleared

The orchestrator maintains its own cache (`_context_cache`) with a 5-second TTL. If requests come within 5 seconds:

- **Orchestrator cache hit**: Returns cached context immediately
- **Context manager never called**: Never checks session_cache or the database
- **Result**: Even if session_cache were cleared, the orchestrator cache would still return stale data

**Location**:
- File: `src/orchestrator_engine.py`
- Method: `_get_or_create_context()`
- Lines: 89-93

#### Issue 3: No Detection of Context Reference Mismatches

When a user explicitly references previous context (e.g., "based on above inputs") but the system has 0 interaction contexts, there's no mechanism to:

1. Detect the mismatch
2. Force cache invalidation
3. Re-query the database
4. Warn about potential context loss

**Location**:
- File: `src/orchestrator_engine.py`
- Method: `process_request()`
- Lines: 172-174 (context retrieval happens, but no validation)

## Code Flow Analysis

### Storage Flow (Working)

```
orchestrator.process_request()
  └─> generate_interaction_context()
      └─> llm_router.route_inference() → Generate summary
          └─> Database INSERT → Store in interaction_contexts table
              ├─> ✅ SUCCESS: Stored in database
              └─> ❌ MISSING: Cache invalidation
```

### Retrieval Flow (Broken)

```
orchestrator.process_request()
  └─> _get_or_create_context()
      ├─> Check orchestrator cache (5s TTL)
      │   └─> If hit: Return cached (may be stale)
      └─> manage_context()
          ├─> Check session_cache
          │   └─> If hit: Return cached (STALE - has 0 contexts)
          └─> _retrieve_from_db() (NEVER REACHED if cache hit)
              └─> Query: SELECT FROM interaction_contexts WHERE session_id = ?
                  └─> Would return stored contexts, but never called
```

## Database Verification

The interaction context **IS being stored** correctly. Evidence:

1. **Log Entry**:
```
2025-10-31 06:55:55,481 - src.context_manager - INFO - ✓ Generated interaction context for 64d4ace2_15ca4dec_1761890055
```

2. **Storage Code** (src/context_manager.py:426-438):
```python
cursor.execute("""
    INSERT OR REPLACE INTO interaction_contexts
    (interaction_id, session_id, user_input, system_response, interaction_summary, created_at)
    VALUES (?, ?, ?, ?, ?, ?)
""", (interaction_id, session_id, user_input[:500], system_response[:1000], summary.strip(), datetime.now().isoformat()))
conn.commit()
conn.close()
```
✅ This executes successfully and commits

3. **Retrieval Code** (src/context_manager.py:656-671):
```python
cursor.execute("""
    SELECT interaction_summary, created_at, needs_refresh
    FROM interaction_contexts
    WHERE session_id = ? AND (needs_refresh IS NULL OR needs_refresh = 0)
    ORDER BY created_at DESC
    LIMIT 20
""", (session_id,))
```
✅ This query would work, but is never executed due to the cache hit

## Cache Invalidation Points

Current cache invalidation only happens in these scenarios:

1. **Session End**: `end_session()` clears the cache (lines 534-536)
2. **User Change**: User mismatch detection clears the cache (lines 254-255)
3. **Never**: After generating an interaction context ❌

## Expected vs Actual Behavior

### Expected Behavior:
```
Request 1 → Generate context → Store in DB → Clear cache
Request 2 → Cache miss → Query DB → Find stored context → Use it
```

### Actual Behavior:
```
Request 1 → Generate context → Store in DB → Keep cache (stale)
Request 2 → Cache hit → Return stale cache (0 contexts) → Never query DB
```

## Evidence from Logs

```
# First Request - Context Generation
2025-10-31 06:55:55,481 - src.context_manager - INFO - ✓ Generated interaction context for 64d4ace2_15ca4dec_1761890055

# Second Request - Cache Hit (No DB Query)
2025-10-31 07:02:55,911 - src.context_manager - INFO - Context retrieved: 0 interaction contexts
```

**Time Gap**: 7 minutes between requests (well beyond the 5-second orchestrator cache TTL)
**Result**: Still 0 contexts → session cache hit, database never queried

## Impact Assessment

### Functional Impact:
- **HIGH**: Conversation continuity completely broken
- Users cannot reference previous responses
- Each request treated as isolated, losing all context

### User Experience Impact:
- **HIGH**: Responses generated for wrong topics
- Frustration when "based on above inputs" is ignored
- Loss of trust in system reliability

### Performance Impact:
- **LOW**: Cache is working (too well - preventing fresh data retrieval)
- Database queries are being avoided (but should happen after context generation)

## Conclusion

The interaction context system is **architecturally sound** but has a **critical cache invalidation bug**:

1. ✅ Interaction contexts are correctly generated
2. ✅ Interaction contexts are correctly stored in the database
3. ✅ The database retrieval query is correctly implemented
4. ❌ The cache is never invalidated after interaction context generation
5. ❌ A cache hit prevents the database query from executing
6. ❌ Stale cached context (with 0 interaction contexts) is returned

**The fix requires** invalidating both:
- The Context Manager's `session_cache` after `generate_interaction_context()`
- The Orchestrator's `_context_cache` after `generate_interaction_context()`

This will force fresh database queries on subsequent requests, allowing stored interaction contexts to be retrieved and used.

## Files Involved

1. `src/orchestrator_engine.py` - Lines 442-450 (missing cache invalidation)
2. `src/orchestrator_engine.py` - Lines 83-113 (orchestrator cache)
3. `src/context_manager.py` - Lines 235-289 (session cache management)
4. `src/context_manager.py` - Lines 396-451 (interaction context generation)

## Additional Notes

- The cache mechanism itself is working as designed (performance optimization)
- The bug is in the **cache lifecycle management** (invalidation timing)
- Database operations are functioning correctly
- The issue is purely in the caching layer, not the persistence layer
LLM_BASED_TOPIC_EXTRACTION_IMPLEMENTATION.md
ADDED
@@ -0,0 +1,268 @@
# LLM-Based Topic Extraction Implementation (Option 2)

## Summary

Successfully implemented Option 2: LLM-based zero-shot classification for topic extraction and continuity analysis, replacing hardcoded pattern matching.

## Changes Implemented

### 1. Topic Cache Infrastructure

**Location**: `src/orchestrator_engine.py` - `__init__()` (lines 34-36)

**Added**:
```python
# Cache for topic extraction to reduce API calls
self._topic_cache = {}
self._topic_cache_max_size = 100  # Limit cache size
```

**Purpose**: Cache topic extraction results to minimize LLM API calls for identical queries.

---

### 2. LLM-Based Topic Extraction

**Location**: `src/orchestrator_engine.py` - `_extract_main_topic()` (lines 1276-1343)

**Changes**:
- **Method signature**: Changed to `async def _extract_main_topic(self, user_input: str, context: dict = None) -> str`
- **Implementation**: Uses LLM zero-shot classification instead of hardcoded keywords
- **Context-aware**: Uses session_context and interaction_contexts from cache when available
- **Caching**: Implements a cache with FIFO eviction (max 100 entries)
- **Fallback**: Falls back to simple word extraction if the LLM is unavailable

**LLM Prompt**:
```
Classify the main topic of this query in 2-5 words. Be specific and concise.

Query: "{user_input}"
[Session context if available]

Respond with ONLY the topic name (e.g., "Machine Learning", "Healthcare Analytics").
```

**Temperature**: 0.3 (for consistency)
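A condensed sketch of how a cached, LLM-backed extractor like this can be structured is shown below. It is illustrative only: the `llm_call` parameter stands in for the project's `llm_router.route_inference()` (whose exact signature is not documented here), and the stop-on-error fallback follows the behaviour described later in this document.

```python
import hashlib
from typing import Awaitable, Callable, Dict

async def extract_main_topic(user_input: str,
                             topic_cache: Dict[str, str],
                             llm_call: Callable[[str], Awaitable[str]],
                             max_cache_size: int = 100) -> str:
    """Classify the query's main topic via an LLM, with an MD5-keyed FIFO cache."""
    cache_key = hashlib.md5(user_input.encode("utf-8")).hexdigest()
    if cache_key in topic_cache:
        return topic_cache[cache_key]                   # cache hit: no LLM call

    # Session/interaction context from the cache can be appended to this prompt.
    prompt = (
        "Classify the main topic of this query in 2-5 words. Be specific and concise.\n\n"
        f'Query: "{user_input}"\n\n'
        'Respond with ONLY the topic name (e.g., "Machine Learning", "Healthcare Analytics").'
    )
    try:
        topic = (await llm_call(prompt)).strip()
    except Exception:
        topic = " ".join(user_input.split()[:4])        # documented fallback: first 4 words

    if len(topic_cache) >= max_cache_size:              # FIFO eviction of the oldest entry
        topic_cache.pop(next(iter(topic_cache)))
    topic_cache[cache_key] = topic
    return topic
```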
---

### 3. LLM-Based Topic Continuity Analysis

**Location**: `src/orchestrator_engine.py` - `_analyze_topic_continuity()` (lines 1029-1094)

**Changes**:
- **Method signature**: Changed to `async def _analyze_topic_continuity(self, context: dict, user_input: str) -> str`
- **Implementation**: Uses the LLM to determine if the query continues the previous topic or introduces a new topic
- **Context-aware**: Uses session_context and interaction_contexts from cache
- **Format validation**: Validates the LLM response format ("Continuing X" or "New topic: X")
- **Fallback**: Returns a descriptive message if the LLM is unavailable

**LLM Prompt**:
```
Determine if the current query continues the previous conversation topic or introduces a new topic.

Session Summary: {session_summary}
Recent Interactions: {recent_interactions}

Current Query: "{user_input}"

Respond with EXACTLY one of:
- "Continuing [topic name] discussion" if same topic
- "New topic: [topic name]" if different topic
```

**Temperature**: 0.3 (for consistency)

---

### 4. Keyword Extraction Update

**Location**: `src/orchestrator_engine.py` - `_extract_keywords()` (lines 1345-1361)

**Changes**:
- **Method signature**: Changed to `async def _extract_keywords(self, user_input: str) -> str`
- **Implementation**: Simple regex-based extraction (not LLM-based, for performance)
- **Stop word filtering**: Filters common stop words
- **Note**: Can be enhanced with an LLM if needed, but kept simple for performance (a sketch of this extraction follows)
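A regex-plus-stop-word extractor of the kind described here is roughly the following. The stop-word list and regex are illustrative assumptions; only the comma-separated output and the "General terms" fallback come from this document.

```python
import re

# Small illustrative stop-word list; the real list in _extract_keywords() may differ.
STOP_WORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "from",
    "how", "in", "is", "it", "of", "on", "or", "that", "the", "this", "to",
    "what", "which", "with", "you", "your",
}

def extract_keywords(user_input: str) -> str:
    """Return a comma-separated string of non-stop-word terms from the query."""
    try:
        words = re.findall(r"[a-zA-Z][a-zA-Z0-9_-]+", user_input.lower())
        keywords = [w for w in words if w not in STOP_WORDS]
        return ", ".join(dict.fromkeys(keywords)) or "General terms"  # dedupe, keep order
    except Exception:
        return "General terms"   # documented fallback on error
```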
---

### 5. Updated All Usage Sites

**Location**: `src/orchestrator_engine.py` - `process_request()` (lines 184-200)

**Changes**:
- **Extract topic once**: `main_topic = await self._extract_main_topic(user_input, context)`
- **Extract continuity**: `topic_continuity = await self._analyze_topic_continuity(context, user_input)`
- **Extract keywords**: `query_keywords = await self._extract_keywords(user_input)`
- **Reuse main_topic**: All 18+ usage sites now use the `main_topic` variable instead of calling the method repeatedly

**Updated Reasoning Chain Steps**:
- Step 1: Uses `main_topic` (line 190)
- Step 2: Uses `main_topic` (lines 251, 259)
- Step 3: Uses `main_topic` (lines 268, 276)
- Step 4: Uses `main_topic` (lines 304, 312)
- Step 5: Uses `main_topic` (lines 384, 392)
- Alternative paths: Uses `main_topic` (lines 403, 1146-1166)

**Error Recovery**: Simplified to avoid async complexity (line 1733)

---

### 6. Alternative Paths Method Update

**Location**: `src/orchestrator_engine.py` - `_generate_alternative_paths()` (lines 1136-1169)

**Changes**:
- **Method signature**: Added `main_topic` parameter
- **Before**: `def _generate_alternative_paths(self, intent_result: dict, user_input: str) -> list:`
- **After**: `def _generate_alternative_paths(self, intent_result: dict, user_input: str, main_topic: str) -> list:`
- **Updated call site**: Line 403 passes `main_topic` as the third parameter

---

## Performance Characteristics

### Latency Impact

**Per Request**:
- 2 LLM calls per request (topic extraction + continuity analysis)
- Estimated latency: ~200-500ms total (depending on the LLM router)
- Caching reduces repeat calls: cache hit = 0ms latency

**Mitigation**:
- Topic extraction cached per unique query (MD5 hash)
- Cache size limited to 100 entries (FIFO eviction)
- Keyword extraction kept simple (no LLM, minimal latency)

### API Costs

**Per Request**:
- Topic extraction: ~50-100 tokens
- Topic continuity: ~100-150 tokens
- Total: ~150-250 tokens per request (first time)
- Cached requests: 0 tokens

**Monthly Estimate** (assuming 1000 unique queries/day):
- First requests: ~150-250k tokens/day = ~4.5-7.5M tokens/month
- Subsequent requests: cached, 0 tokens
- Actual usage depends on the cache hit rate

---

## Error Handling

### Fallback Mechanisms

1. **Topic Extraction**:
   - If LLM unavailable: falls back to the first 4 words of the query
   - If LLM error: logs the error, returns the fallback
   - Cache miss handling: generates and caches

2. **Topic Continuity**:
   - If LLM unavailable: returns "Topic continuity analysis unavailable"
   - If no context: returns "No previous context"
   - If LLM error: logs the error, returns "Topic continuity analysis failed"

3. **Keywords**:
   - Simple extraction, no LLM dependency
   - Error handling: returns "General terms" on exception

---

## Testing Recommendations

### Unit Tests

1. **Topic Extraction**:
   - Test LLM-based extraction with various queries
   - Test caching behavior (cache hit/miss)
   - Test fallback behavior when the LLM is unavailable
   - Test context-aware extraction

2. **Topic Continuity**:
   - Test continuation detection
   - Test new topic detection
   - Test with empty context
   - Test format validation

3. **Integration Tests**:
   - Test the full request flow with LLM calls
   - Test cache persistence across requests
   - Test error recovery with LLM failures

### Performance Tests

1. **Latency Measurement**:
   - Measure average latency with LLM calls
   - Measure latency with cache hits
   - Compare to the previous pattern-based approach

2. **Cache Effectiveness**:
   - Measure cache hit rate
   - Test cache eviction behavior

---

## Migration Notes

### Breaking Changes

**None**: All changes are internal to the orchestrator. The external API is unchanged.

### Compatibility

- **LLM Router Required**: System requires `llm_router` to be available
- **Graceful Degradation**: Falls back to simple extraction if the LLM is unavailable
- **Backward Compatible**: Old pattern-based code removed, but fallbacks maintain functionality

---

## Benefits Realized

- ✅ **Accurate Topic Classification**: LLM understands context, synonyms, nuances
- ✅ **Domain Adaptive**: Works for any domain without code changes
- ✅ **Context-Aware**: Uses session_context and interaction_contexts
- ✅ **Human-Readable**: Maintains descriptive reasoning chain strings
- ✅ **Scalable**: No manual keyword list maintenance
- ✅ **Cached**: Reduces API calls for repeated queries

---

## Trade-offs

- ⚠️ **Latency**: Adds ~200-500ms per request (first time, cached after)
- ⚠️ **API Costs**: ~150-250 tokens per request (first time)
- ⚠️ **LLM Dependency**: Requires the LLM router to be functional
- ⚠️ **Complexity**: More code to maintain (async handling, caching, error handling)
- ⚠️ **Inconsistency Risk**: LLM responses may vary slightly (mitigated by temperature=0.3)

---

## Files Modified

1. `src/orchestrator_engine.py`:
   - Added topic cache infrastructure
   - Rewrote `_extract_main_topic()` to use the LLM
   - Rewrote `_analyze_topic_continuity()` to use the LLM
   - Updated `_extract_keywords()` to async
   - Updated all 18+ usage sites to use the cached `main_topic`
   - Updated the `_generate_alternative_paths()` signature

---

## Next Steps

1. **Monitor Performance**: Track latency and cache hit rates
2. **Tune Caching**: Adjust cache size based on usage patterns
3. **Optional Enhancements**:
   - Consider LLM-based keyword extraction if needed
   - Add topic extraction metrics/logging
   - Implement cache persistence across restarts

---

## Conclusion

Option 2 implementation complete. The system now uses LLM-based zero-shot classification for topic extraction and continuity analysis, providing accurate, context-aware topic classification without hardcoded patterns. Caching minimizes latency and API costs for repeated queries.
PATTERN_BASED_TOPIC_ANALYSIS_REVIEW.md
ADDED
@@ -0,0 +1,374 @@
# Pattern-Based Topic Analysis Review and Options

## Executive Summary

The orchestrator uses hardcoded pattern matching for topic extraction and continuity analysis in three methods:
1. `_analyze_topic_continuity()` - Hardcoded keyword matching (ML, AI, Data Science only)
2. `_extract_main_topic()` - Hardcoded keyword matching (10+ topic categories)
3. `_extract_keywords()` - Hardcoded important terms list

These methods are used extensively throughout the workflow, affecting reasoning chains, hypothesis generation, and agent execution tracking.

## Current Implementation Analysis

### 1. `_analyze_topic_continuity()` (Lines 1026-1069)

**Current Approach:**
- Pattern matching against 3 hardcoded topics: "machine learning", "artificial intelligence", "data science"
- Checks the session context summary and interaction context summaries for keywords
- Returns: "Continuing {topic} discussion" or "New topic: {topic}"

**Limitations:**
- Only recognizes 3 topics
- Misses domain-specific topics (e.g., healthcare, finance, legal)
- Misses nuanced topics (e.g., "transformer architectures" → classified as "general")
- Brittle: fails on synonyms, typos, or domain-specific terminology
- Not learning-enabled: requires manual updates for new domains

**Usage:**
- Reasoning chain step_1 evidence (line 187)
- Used once per request for context analysis (an illustrative sketch of this keyword matching follows)
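To make the brittleness concrete, the kind of hardcoded matching described above looks roughly like the sketch below. This is an illustration of the pattern being reviewed, not the exact code; the topic list and return strings follow the description above, while the function signature is hypothetical.

```python
# Illustrative sketch of the hardcoded matching this review criticizes.
HARDCODED_TOPICS = ["machine learning", "artificial intelligence", "data science"]

def analyze_topic_continuity(previous_summaries: list, user_input: str) -> str:
    text = user_input.lower()
    for topic in HARDCODED_TOPICS:
        if topic in text:
            # "Continuing" only if the same keyword also appears in earlier summaries
            if any(topic in s.lower() for s in previous_summaries):
                return f"Continuing {topic} discussion"
            return f"New topic: {topic}"
    # Anything outside the three hardcoded topics falls through to "general"
    return "New topic: general"
```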
### 2. `_extract_main_topic()` (Lines 1251-1279)

**Current Approach:**
- Pattern matching against 10+ topic categories:
  - AI chatbot course curriculum
  - Programming course curriculum
  - Educational course design
  - Machine learning concepts
  - Artificial intelligence and chatbots
  - Data science and analysis
  - Software development and programming
  - General inquiry (fallback)

**Limitations:**
- Hardcoded keyword lists
- Hierarchical but limited (e.g., curriculum → AI vs Programming)
- Fallback to the first 4 words if no match
- Same brittleness as topic continuity

**Usage:**
- **Extensively used (18 times):**
  - Reasoning chain step_1 hypothesis (line 182)
  - Reasoning chain step_1 reasoning (line 191)
  - Reasoning chain step_2 reasoning (skills) (line 238)
  - Reasoning chain step_3 hypothesis (line 243)
  - Reasoning chain step_3 reasoning (line 251)
  - Reasoning chain step_4 hypothesis (line 260)
  - Reasoning chain step_4 reasoning (line 268)
  - Reasoning chain step_5 hypothesis (line 296)
  - Reasoning chain step_5 reasoning (line 304)
  - Reasoning chain step_6 hypothesis (line 376)
  - Reasoning chain step_6 reasoning (line 384)
  - Alternative reasoning paths (line 1110)
  - Error recovery (line 1665)

### 3. `_extract_keywords()` (Lines 1281-1295)

**Current Approach:**
- Extracts keywords from a hardcoded important terms list
- Returns a comma-separated string of matched keywords

**Limitations:**
- Static list requires manual updates
- May miss domain-specific terminology

**Usage:**
- Reasoning chain step_1 evidence (line 188)
- Used once per request

## Current Workflow Impact

### Pattern Matching Usage Flow:

```
Request → Context Retrieval
    ↓
Reasoning Chain Step 1:
  - Hypothesis: Uses _extract_main_topic() → "User is asking about: '{topic}'"
  - Evidence: Uses _analyze_topic_continuity() → "Topic continuity: ..."
  - Evidence: Uses _extract_keywords() → "Query keywords: ..."
  - Reasoning: Uses _extract_main_topic() → "...focused on {topic}..."
    ↓
Intent Recognition (Agent executes independently)
    ↓
Reasoning Chain Steps 2-6:
  - All hypothesis/reasoning strings use _extract_main_topic()
  - Topic appears in 12+ reasoning chain fields
    ↓
Alternative Reasoning Paths:
  - Uses _extract_main_topic() for path generation
    ↓
Error Recovery:
  - Uses _extract_main_topic() for error context
```

### Impact Points:

1. **Reasoning Chain Documentation**: All reasoning chain steps include topic strings
2. **Agent Execution Tracking**: Topic appears in hypothesis and reasoning fields
3. **Error Recovery**: Uses the topic for context in error scenarios
4. **Logging/Debugging**: Topic strings appear in logs and execution traces

**Important Note:** Pattern matching does NOT affect agent execution logic. Agents (Intent, Skills, Synthesis, Safety) execute independently using LLM inference. Pattern matching only affects:
- Reasoning chain metadata (for debugging/analysis)
- Logging messages
- Hypothesis/reasoning strings in execution traces

## Options for Resolution

### Option 1: Remove Pattern Matching, Make Context Independent

**Approach:**
- Remove `_analyze_topic_continuity()`, `_extract_main_topic()`, `_extract_keywords()`
- Replace with generic placeholders or remove from reasoning chains
- Use actual context data (session_context, interaction_contexts, user_context) directly
|
| 127 |
+
|
| 128 |
+
**Implementation Changes:**
|
| 129 |
+
|
| 130 |
+
1. **Replace topic extraction with context-based strings:**
|
| 131 |
+
```python
|
| 132 |
+
# Before:
|
| 133 |
+
hypothesis = f"User is asking about: '{self._extract_main_topic(user_input)}'"
|
| 134 |
+
|
| 135 |
+
# After:
|
| 136 |
+
hypothesis = f"User query analyzed with {len(interaction_contexts)} previous contexts"
|
| 137 |
+
```
|
| 138 |
+
|
| 139 |
+
2. **Replace topic continuity with context-based analysis:**
|
| 140 |
+
```python
|
| 141 |
+
# Before:
|
| 142 |
+
f"Topic continuity: {self._analyze_topic_continuity(context, user_input)}"
|
| 143 |
+
|
| 144 |
+
# After:
|
| 145 |
+
f"Session context available: {bool(session_context)}"
|
| 146 |
+
f"Interaction contexts: {len(interaction_contexts)}"
|
| 147 |
+
```
|
| 148 |
+
|
| 149 |
+
3. **Replace keywords with user input excerpt:**
|
| 150 |
+
```python
|
| 151 |
+
# Before:
|
| 152 |
+
f"Query keywords: {self._extract_keywords(user_input)}"
|
| 153 |
+
|
| 154 |
+
# After:
|
| 155 |
+
f"Query: {user_input[:100]}..."
|
| 156 |
+
```
|
| 157 |
+
|
| 158 |
+
**Impact Analysis:**
|
| 159 |
+
|
| 160 |
+
✅ **Benefits:**
|
| 161 |
+
- **No hardcoded patterns**: Context independent of pattern learning
|
| 162 |
+
- **Simpler code**: Removes 100+ lines of pattern matching logic
|
| 163 |
+
- **More accurate**: Uses actual context data instead of brittle keyword matching
|
| 164 |
+
- **Domain agnostic**: Works for any topic/domain without updates
|
| 165 |
+
- **Maintainability**: No need to update keyword lists for new domains
|
| 166 |
+
- **Performance**: No pattern matching overhead (minimal, but measurable)
|
| 167 |
+
|
| 168 |
+
❌ **Drawbacks:**
|
| 169 |
+
- **Less descriptive reasoning chains**: Hypothesis strings less specific (e.g., "User query analyzed" vs "User is asking about: Machine learning concepts")
|
| 170 |
+
- **Reduced human readability**: Reasoning chain traces less informative for debugging
|
| 171 |
+
- **Lost topic continuity insight**: No explicit "continuing topic X" vs "new topic Y" distinction
|
| 172 |
+
|
| 173 |
+
**Workflow Impact:**
|
| 174 |
+
- **No impact on agent execution**: Agents already use LLM inference, not pattern matching
|
| 175 |
+
- **Reasoning chains less informative**: But still functional for debugging
|
| 176 |
+
- **Logging less specific**: But still captures context availability
|
| 177 |
+
- **No breaking changes**: All downstream components work with generic strings
|
| 178 |
+
|
| 179 |
+
**Files Modified:**
|
| 180 |
+
- `src/orchestrator_engine.py`: Remove 3 methods, update 18+ usage sites
|
| 181 |
+
|
| 182 |
+
**Estimated Effort:** Low (1-2 hours)
|
| 183 |
+
**Risk Level:** Low (only affects metadata, not logic)
|
| 184 |
+
|
| 185 |
+
---
|
| 186 |
+
|
| 187 |
+
### Option 2: Use LLM API for Zero-Shot Classification
|
| 188 |
+
|
| 189 |
+
**Approach:**
|
| 190 |
+
- Replace pattern matching with LLM-based zero-shot topic classification
|
| 191 |
+
- Use LLM router to classify topics dynamically
|
| 192 |
+
- Cache results to minimize API calls
|
| 193 |
+
|
| 194 |
+
**Implementation Changes:**
|
| 195 |
+
|
| 196 |
+
1. **Create LLM-based topic extraction:**
|
| 197 |
+
```python
|
| 198 |
+
async def _extract_main_topic_llm(self, user_input: str, context: dict) -> str:
|
| 199 |
+
"""Extract topic using LLM zero-shot classification"""
|
| 200 |
+
prompt = f"""Classify the main topic of this query in 2-5 words:
|
| 201 |
+
|
| 202 |
+
Query: "{user_input}"
|
| 203 |
+
|
| 204 |
+
Available context:
|
| 205 |
+
- Session summary: {context.get('session_context', {}).get('summary', 'N/A')[:200]}
|
| 206 |
+
- Recent interactions: {len(context.get('interaction_contexts', []))}
|
| 207 |
+
|
| 208 |
+
Respond with ONLY the topic name (e.g., "Machine Learning", "Healthcare Analytics", "Financial Modeling")."""
|
| 209 |
+
|
| 210 |
+
topic = await self.llm_router.route_inference(
|
| 211 |
+
task_type="classification",
|
| 212 |
+
prompt=prompt,
|
| 213 |
+
max_tokens=20,
|
| 214 |
+
temperature=0.3
|
| 215 |
+
)
|
| 216 |
+
return topic.strip() if topic else "General inquiry"
|
| 217 |
+
```
|
| 218 |
+
|
| 219 |
+
2. **Create LLM-based topic continuity:**
|
| 220 |
+
```python
|
| 221 |
+
async def _analyze_topic_continuity_llm(self, context: dict, user_input: str) -> str:
|
| 222 |
+
"""Analyze topic continuity using LLM"""
|
| 223 |
+
session_context = context.get('session_context', {}).get('summary', '')
|
| 224 |
+
recent_interactions = context.get('interaction_contexts', [])[:3]
|
| 225 |
+
|
| 226 |
+
prompt = f"""Determine if the current query continues the previous conversation topic or introduces a new topic.
|
| 227 |
+
|
| 228 |
+
Session Summary: {session_context[:300]}
|
| 229 |
+
Recent Interactions:
|
| 230 |
+
{chr(10).join([ic.get('summary', '') for ic in recent_interactions])}
|
| 231 |
+
|
| 232 |
+
Current Query: "{user_input}"
|
| 233 |
+
|
| 234 |
+
Respond with one of:
|
| 235 |
+
- "Continuing [topic] discussion" if same topic
|
| 236 |
+
- "New topic: [topic]" if different topic
|
| 237 |
+
|
| 238 |
+
Keep response under 50 words."""
|
| 239 |
+
|
| 240 |
+
continuity = await self.llm_router.route_inference(
|
| 241 |
+
task_type="general_reasoning",
|
| 242 |
+
prompt=prompt,
|
| 243 |
+
max_tokens=50,
|
| 244 |
+
temperature=0.5
|
| 245 |
+
)
|
| 246 |
+
return continuity.strip() if continuity else "No previous context"
|
| 247 |
+
```
|
| 248 |
+
|
| 249 |
+
3. **Update method signatures to async:**
|
| 250 |
+
- `_extract_main_topic()` → `async def _extract_main_topic_llm()`
|
| 251 |
+
- `_analyze_topic_continuity()` → `async def _analyze_topic_continuity_llm()`
|
| 252 |
+
- `_extract_keywords()` → Keep pattern-based or remove (keywords less critical)
|
| 253 |
+
|
| 254 |
+
4. **Add caching:**
|
| 255 |
+
```python
|
| 256 |
+
# Cache topic extraction per user_input (hash)
|
| 257 |
+
topic_cache = {}
|
| 258 |
+
input_hash = hashlib.md5(user_input.encode()).hexdigest()
|
| 259 |
+
if input_hash in topic_cache:
|
| 260 |
+
return topic_cache[input_hash]
|
| 261 |
+
topic = await _extract_main_topic_llm(...)
|
| 262 |
+
topic_cache[input_hash] = topic
|
| 263 |
+
return topic
|
| 264 |
+
```
|
| 265 |
+
|
| 266 |
+
**Impact Analysis:**
|
| 267 |
+
|
| 268 |
+
✅ **Benefits:**
|
| 269 |
+
- **Accurate topic classification**: LLM understands context, synonyms, nuances
|
| 270 |
+
- **Domain adaptive**: Works for any domain without code changes
|
| 271 |
+
- **Context-aware**: Uses session_context and interaction_contexts for continuity
|
| 272 |
+
- **Human-readable**: Maintains descriptive reasoning chain strings
|
| 273 |
+
- **Scalable**: No manual keyword list maintenance
|
| 274 |
+
|
| 275 |
+
❌ **Drawbacks:**
|
| 276 |
+
- **API latency**: Adds 2-3 LLM calls per request (~200-500ms total)
|
| 277 |
+
- **API costs**: Additional tokens consumed per request
|
| 278 |
+
- **Dependency on LLM availability**: Requires LLM router to be functional
|
| 279 |
+
- **Complexity**: More code to maintain (async handling, caching, error handling)
|
| 280 |
+
- **Inconsistency risk**: LLM responses may vary slightly between calls (though temperature=0.3 mitigates)
|
| 281 |
+
|
| 282 |
+
**Workflow Impact:**
|
| 283 |
+
|
| 284 |
+
**Positive:**
|
| 285 |
+
- **More accurate reasoning chains**: Topic classification more reliable
|
| 286 |
+
- **Better debugging**: More informative hypothesis/reasoning strings
|
| 287 |
+
- **Context-aware continuity**: Uses actual session/interaction contexts
|
| 288 |
+
|
| 289 |
+
**Negative:**
|
| 290 |
+
- **Latency increase**: +200-500ms per request (2-3 LLM calls)
|
| 291 |
+
- **Error handling complexity**: Need fallbacks if LLM calls fail
|
| 292 |
+
- **Async complexity**: All 18+ usage sites need await statements
|
| 293 |
+
|
| 294 |
+
**Implementation Complexity:**
|
| 295 |
+
- **Method conversion**: 3 methods → async LLM calls
|
| 296 |
+
- **Usage site updates**: 18+ sites need await/async propagation
|
| 297 |
+
- **Caching infrastructure**: Add cache layer to reduce API calls
|
| 298 |
+
- **Error handling**: Fallbacks if LLM unavailable
|
| 299 |
+
- **Testing**: Verify LLM responses are reasonable
|
| 300 |
+
|
| 301 |
+
**Files Modified:**
|
| 302 |
+
- `src/orchestrator_engine.py`: Rewrite 3 methods, update 18+ usage sites with async/await
|
| 303 |
+
- May need `process_request()` refactoring for async topic extraction
|
| 304 |
+
|
| 305 |
+
**Estimated Effort:** Medium-High (4-6 hours)
|
| 306 |
+
**Risk Level:** Medium (adds latency and LLM dependency)
|
| 307 |
+
|
| 308 |
+
---
|
| 309 |
+
|
| 310 |
+
## Recommendation
|
| 311 |
+
|
| 312 |
+
### Recommended: **Option 1 (Remove Pattern Matching)**
|
| 313 |
+
|
| 314 |
+
**Rationale:**
|
| 315 |
+
1. **No impact on core functionality**: Pattern matching only affects metadata strings, not agent execution
|
| 316 |
+
2. **Simpler implementation**: Low risk, fast to implement
|
| 317 |
+
3. **No performance penalty**: Removes overhead instead of adding LLM calls
|
| 318 |
+
4. **Maintainability**: Less code to maintain
|
| 319 |
+
5. **Context independence**: Aligns with user requirement for pattern-independent context
|
| 320 |
+
|
| 321 |
+
**If descriptive reasoning chains are critical:**
|
| 322 |
+
- **Hybrid Approach**: Use Option 1 for production, but add optional LLM-based topic extraction as a debug/logging enhancement (non-blocking, optional)
|
| 323 |
+
|
| 324 |
+
### Alternative: **Option 2 (LLM Classification) if reasoning chain quality is critical**
|
| 325 |
+
|
| 326 |
+
**Use Case:**
|
| 327 |
+
- If reasoning chain metadata is used for:
|
| 328 |
+
- User-facing explanations
|
| 329 |
+
- Advanced debugging/analysis tools
|
| 330 |
+
- External integrations requiring topic metadata
|
| 331 |
+
- Then the latency/API cost may be justified
|
| 332 |
+
|
| 333 |
+
## Migration Path
|
| 334 |
+
|
| 335 |
+
### Option 1 Implementation Steps:
|
| 336 |
+
|
| 337 |
+
1. **Remove methods:**
|
| 338 |
+
- Delete `_analyze_topic_continuity()` (lines 1026-1069)
|
| 339 |
+
- Delete `_extract_main_topic()` (lines 1251-1279)
|
| 340 |
+
- Delete `_extract_keywords()` (lines 1281-1295)
|
| 341 |
+
|
| 342 |
+
2. **Replace usages:**
|
| 343 |
+
- Line 182: `hypothesis` → Use generic: "User query analysis"
|
| 344 |
+
- Line 187: `Topic continuity` → Use: "Session context available: {bool(session_context)}"
|
| 345 |
+
- Line 188: `Query keywords` → Use: "Query: {user_input[:100]}"
|
| 346 |
+
- Line 191: `reasoning` → Remove topic references
|
| 347 |
+
- Lines 238, 243, 251, 260, 268, 296, 304, 376, 384: Remove topic from reasoning strings
|
| 348 |
+
- Line 1110: Remove topic from alternative paths
|
| 349 |
+
- Line 1665: Use generic error context
|
| 350 |
+
|
| 351 |
+
3. **Test:**
|
| 352 |
+
- Verify reasoning chains still populate correctly
|
| 353 |
+
- Verify no syntax errors
|
| 354 |
+
- Verify agents execute normally
|
| 355 |
+
|
| 356 |
+
### Option 2 Implementation Steps:
|
| 357 |
+
|
| 358 |
+
1. **Create async LLM methods** (as shown above)
|
| 359 |
+
2. **Add caching layer**
|
| 360 |
+
3. **Update `process_request()` to await topic extraction before reasoning chain**
|
| 361 |
+
4. **Add error handling with fallbacks**
|
| 362 |
+
5. **Test latency impact**
|
| 363 |
+
6. **Monitor API usage**
|
| 364 |
+
|
| 365 |
+
## Conclusion
|
| 366 |
+
|
| 367 |
+
**Option 1** is recommended for immediate implementation due to:
|
| 368 |
+
- Low risk and complexity
|
| 369 |
+
- No performance penalty
|
| 370 |
+
- Aligns with context independence requirement
|
| 371 |
+
- Pattern matching doesn't affect agent execution
|
| 372 |
+
|
| 373 |
+
**Option 2** should be considered only if reasoning chain metadata quality is critical for user-facing features or advanced debugging.
|
| 374 |
+
|
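As a closing illustration of Option 1, the sketch below shows one way the step_1 block could be built purely from cached context data. It is illustrative only: the variable names (`context`, `reasoning_chain`, `user_input`) follow the snippets earlier in this document, not the actual implementation.

```python
# Illustrative sketch of the Option 1 replacement for step_1.
# All evidence strings are derived from cached context data; no keyword matching.
session_context = context.get('session_context', {}) or {}
interaction_contexts = context.get('interaction_contexts', [])

reasoning_chain["chain_of_thought"]["step_1"] = {
    "hypothesis": f"User query analyzed with {len(interaction_contexts)} previous contexts",
    "evidence": [
        f"Session context available: {bool(session_context.get('summary'))}",
        f"Interaction contexts: {len(interaction_contexts)}",
        f"Query: {user_input[:100]}...",
    ],
    "confidence": 0.85,
    "reasoning": "Context analysis based on cached session, user, and interaction data only",
}
```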
SESSION_CONTEXT_EVERY_TURN_IMPLEMENTATION.md
ADDED
@@ -0,0 +1,236 @@
# Session Context Every Turn - Implementation Summary

## Overview

Modified the system to generate and use session context at every conversation turn, following the same pattern as interaction contexts. Session context is now available from cache for all agents and components.

## Changes Implemented

### 1. Session Context Generation at Every Turn

**Location**: `src/orchestrator_engine.py` (lines 451-457)

**Change**: Added session context generation after interaction context generation, at every turn.

```python
# STEP 3: Generate Session Context after each response (100 tokens)
# Uses cached interaction contexts, updates database and cache
try:
    await self.context_manager.generate_session_context(session_id, user_id)
    # Cache is automatically updated by generate_session_context()
except Exception as e:
    logger.error(f"Error generating session context: {e}", exc_info=True)
```

**Impact**: Session context is now generated and available for every turn, not just at session end.

### 2. Session Context Uses Cache (Not Database)

**Location**: `src/context_manager.py` - `generate_session_context()` (lines 463-542)

**Change**: Modified to use cached interaction contexts instead of querying the database.

**Before**: Queried database for all interaction contexts
```python
cursor.execute("SELECT interaction_summary FROM interaction_contexts WHERE session_id = ?")
```

**After**: Uses cached interaction contexts
```python
# Get interaction contexts from cache (no database query)
session_cache_key = f"session_{session_id}"
cached_context = self.session_cache.get(session_cache_key)
interaction_contexts = cached_context.get('interaction_contexts', [])
```

**Impact**: Faster session context generation, no database queries during conversation.

### 3. Session Context Cache Update

**Location**: `src/context_manager.py` - `_update_cache_with_session_context()` (lines 826-859)

**Change**: Added method to update cache immediately after the database write, same pattern as interaction contexts.

```python
def _update_cache_with_session_context(self, session_id: str, session_summary: str, created_at: str):
    """Update cache with new session context immediately after database update"""
    cached_context['session_context'] = {
        "summary": session_summary,
        "timestamp": created_at
    }
    self.session_cache[session_cache_key] = cached_context
```

**Impact**: Cache stays synchronized with database, no queries needed for subsequent requests.

### 4. Session Context Included in Context Structure

**Location**: `src/context_manager.py` - `_optimize_context()` (lines 570-604)

**Change**: Added session context to the optimized context structure and the `combined_context` string.

**Before**:
```python
combined_context = "[User Context]...\n[Interaction Contexts]..."
```

**After**:
```python
combined_context = "[Session Context]...\n[User Context]...\n[Interaction Contexts]..."
```

**Impact**: Session context now available in `combined_context` for all agents.

### 5. Session Context Retrieved from Database on Cache Miss

**Location**: `src/context_manager.py` - `_retrieve_from_db()` (lines 707-725)

**Change**: Added query to retrieve session context when loading from database (cache miss only).

```python
# Get session context from database
cursor.execute("""
    SELECT session_summary, created_at
    FROM session_contexts
    WHERE session_id = ?
    ORDER BY created_at DESC
    LIMIT 1
""", (session_id,))
```

**Impact**: Session context available even on cache miss (first request or cache expiration).

### 6. All Agents Updated to Use Session Context

#### Intent Agent (`src/agents/intent_agent.py`)
- **Lines 115-122**: Added session context extraction from cache
- **Usage**: Includes session context in intent recognition prompt

#### Skills Identification Agent (`src/agents/skills_identification_agent.py`)
- **Lines 230-236**: Added session context extraction from cache
- **Usage**: Includes session context in market analysis prompt

#### Safety Agent (`src/agents/safety_agent.py`)
- **Lines 158-164**: Added session context extraction from cache
- **Usage**: Includes session context in safety analysis prompt

#### Synthesis Agent (`src/agents/synthesis_agent.py`)
- **Lines 552-561**: Added session context extraction from cache
- **Usage**: Includes session context in response synthesis prompt

### 7. Orchestrator Context Summary Updated

**Location**: `src/orchestrator_engine.py` - `_build_context_summary()` (lines 651-673)

**Change**: Added session context to the context summary used for task execution.

```python
# Extract session context (from cache)
session_context = context.get('session_context', {})
session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
if session_summary:
    summary_parts.append(f"Session summary: {session_summary[:150]}")
```

**Impact**: Task execution prompts now include session context.

## Context Structure

Context now includes session context:

```python
context = {
    "session_id": str,
    "user_id": str,
    "user_context": str,           # 500-token user persona summary (from cache)
    "session_context": {           # 100-token session summary (from cache)
        "summary": str,
        "timestamp": str
    },
    "interaction_contexts": [      # List of interaction summaries (from cache)
        {
            "summary": str,        # 50-token interaction summary
            "timestamp": str
        },
        ...
    ],
    "combined_context": str,       # Pre-formatted: "[Session Context]\n[User Context]\n[Interaction Contexts]"
    "preferences": dict,
    "active_tasks": list,
    "last_activity": str
}
```

## Flow Diagram

### Every Conversation Turn:

```
1. User Request
   ↓
2. Context Retrieved (from cache)
   - Session Context: ✅ Available from cache
   - User Context: ✅ Available from cache
   - Interaction Contexts: ✅ Available from cache
   ↓
3. Agents Execute
   - Intent Agent: Uses session context from cache ✅
   - Skills Agent: Uses session context from cache ✅
   - Synthesis Agent: Uses session context from cache ✅
   - Safety Agent: Uses session context from cache ✅
   ↓
4. Response Generated
   ↓
5. Interaction Context Generated
   - Store in DB
   - Update cache immediately ✅
   ↓
6. Session Context Generated (NEW - every turn)
   - Uses cached interaction contexts (no DB query) ✅
   - Store in DB
   - Update cache immediately ✅
   ↓
7. Next Request: All contexts available from cache ✅
```

## Benefits

1. **Session Context Available Every Turn**: Agents can use the session summary for better context awareness
2. **No Database Queries**: Session context generation uses cached interaction contexts
3. **Cache Synchronization**: Cache updated immediately when the database is updated
4. **Better Context Awareness**: Agents have access to a session-level summary in addition to interaction-level summaries
5. **Consistent Pattern**: Session context follows the same pattern as interaction context (generate → store → cache)

## Files Modified

1. `src/orchestrator_engine.py`:
   - Added session context generation at every turn (lines 451-457)
   - Updated `_build_context_summary()` to include session context (lines 655-659)

2. `src/context_manager.py`:
   - Modified `generate_session_context()` to use cache (lines 470-485)
   - Added `_update_cache_with_session_context()` method (lines 826-859)
   - Updated `_optimize_context()` to include session context (lines 577-592)
   - Updated `_retrieve_from_db()` to load session context (lines 707-725)
   - Updated all return statements to include the session_context field

3. `src/agents/intent_agent.py`:
   - Added session context extraction and usage (lines 115-122)

4. `src/agents/skills_identification_agent.py`:
   - Added session context extraction and usage (lines 230-236)

5. `src/agents/safety_agent.py`:
   - Added session context extraction and usage (lines 158-164)

6. `src/agents/synthesis_agent.py`:
   - Added session context extraction and usage (lines 552-561)

## Verification

All agents now:
- ✅ Receive context from orchestrator (cache-only)
- ✅ Have access to session_context from cache
- ✅ Include session context in their prompts/tasks
- ✅ Never query database directly (all context from cache)
src/agents/intent_agent.py
CHANGED
```diff
@@ -102,19 +102,24 @@ class IntentRecognitionAgent:
         """Build Chain of Thought prompt for intent recognition"""
 
         # Extract context information from Context Manager structure
+        # Session context, user context, and interaction contexts are all from cache
         context_info = ""
         if context:
-            # Use combined_context if available (pre-formatted by Context Manager)
+            # Use combined_context if available (pre-formatted by Context Manager, includes session context)
             combined_context = context.get('combined_context', '')
             if combined_context:
-                # Use the pre-formatted context from Context Manager
+                # Use the pre-formatted context from Context Manager (includes session context)
                 context_info = f"\n\nAvailable Context:\n{combined_context[:1000]}..."  # Truncate if too long
             else:
-                # Fallback: Build from interaction_contexts
+                # Fallback: Build from session_context, user_context, and interaction_contexts (all from cache)
+                session_context = context.get('session_context', {})
+                session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
                 interaction_contexts = context.get('interaction_contexts', [])
                 user_context = context.get('user_context', '')
 
                 context_parts = []
+                if session_summary:
+                    context_parts.append(f"Session Context: {session_summary[:300]}...")
                 if user_context:
                     context_parts.append(f"User Context: {user_context[:300]}...")
 
```
src/agents/safety_agent.py
CHANGED
```diff
@@ -154,13 +154,18 @@ class SafetyCheckAgent:
         # Extract relevant context information for safety analysis
         context_info = ""
         if context:
-            # Get user context
+            # Get session context, user context, and interaction contexts (all from cache)
+            session_context = context.get('session_context', {})
+            session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
             user_context = context.get('user_context', '')
+            interaction_contexts = context.get('interaction_contexts', [])
+
+            if session_summary:
+                context_info = f"\n\nSession Context (for safety context): {session_summary[:200]}..."
             if user_context:
-                context_info
+                context_info += f"\n\nUser Context (for safety context): {user_context[:200]}..."
 
             # Optionally include recent interaction context to understand conversation flow
-            interaction_contexts = context.get('interaction_contexts', [])
             if interaction_contexts:
                 recent_context = interaction_contexts[-1].get('summary', '') if interaction_contexts else ''
                 if recent_context:
```
src/agents/skills_identification_agent.py
CHANGED
```diff
@@ -224,14 +224,18 @@ class SkillsIdentificationAgent:
             for category, data in self.market_categories.items()
         ])
 
-        # Add context information if available
+        # Add context information if available (all from cache)
        context_info = ""
        if context:
+            session_context = context.get('session_context', {})
+            session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
            user_context = context.get('user_context', '')
            interaction_contexts = context.get('interaction_contexts', [])
 
+            if session_summary:
+                context_info = f"\n\nSession Context (session summary): {session_summary[:300]}..."
            if user_context:
-                context_info
+                context_info += f"\n\nUser Context (persona summary): {user_context[:300]}..."
 
            if interaction_contexts:
                # Include recent interaction context to understand topic continuity
```
src/agents/synthesis_agent.py
CHANGED
```diff
@@ -540,18 +540,25 @@ Response:"""
         return ""
 
     # Prefer combined_context if available (pre-formatted by Context Manager)
+    # combined_context includes Session Context, User Context, and Interaction Contexts
    combined_context = context.get('combined_context', '')
    if combined_context:
        # Use the pre-formatted context from Context Manager
-        # It already includes User Context and Interaction Contexts formatted
+        # It already includes Session Context, User Context, and Interaction Contexts formatted
        return f"\n\nConversation Context:\n{combined_context}"

    # Fallback: Build from individual components if combined_context not available
+    # All components are from cache
+    session_context = context.get('session_context', {})
+    session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
    interaction_contexts = context.get('interaction_contexts', [])
    user_context = context.get('user_context', '')

    context_section = ""

+    # Add session context if available (from cache)
+    if session_summary:
+        context_section += f"\n\nSession Context (Session Summary):\n{session_summary[:500]}...\n"
    # Add user context if available
    if user_context:
        context_section += f"\n\nUser Context (Persona Summary):\n{user_context[:500]}...\n"
```
src/context_manager.py
CHANGED
```diff
@@ -264,7 +264,8 @@ class EfficientContextManager:
         # Cache session context (cache invalidation for user changes is handled in _retrieve_from_db)
         self._warm_memory_cache(session_cache_key, session_context)
 
-        # Handle user context separately
+        # Handle user context separately - load only once and cache thereafter
+        # Cache does not refer to database after initial load
         if not user_context or not user_context.get("user_context_loaded"):
             user_context_data = await self.get_user_context(user_id)
             user_context = {
@@ -272,8 +273,12 @@
                 "user_context_loaded": True,
                 "user_id": user_id
             }
-            # Cache user context separately
+            # Cache user context separately - this is the only database query for user context
             self._warm_memory_cache(user_cache_key, user_context)
+            logger.debug(f"User context loaded once for {user_id} and cached")
+        else:
+            # User context already cached, use it without database query
+            logger.debug(f"Using cached user context for {user_id}")
 
         # Merge contexts without duplication
         merged_context = {
@@ -423,6 +428,7 @@ Provide a brief summary capturing the key exchange."""
             # Store in database
             conn = sqlite3.connect(self.db_path)
             cursor = conn.cursor()
+            created_at = datetime.now().isoformat()
             cursor.execute("""
                 INSERT OR REPLACE INTO interaction_contexts
                 (interaction_id, session_id, user_input, system_response, interaction_summary, created_at)
@@ -433,12 +439,16 @@
                 user_input[:500],
                 system_response[:1000],
                 summary.strip(),
-
+                created_at
             ))
             conn.commit()
             conn.close()
 
-
+            # Update cache immediately with new interaction context
+            # This ensures cache is synchronized with database at the same time
+            self._update_cache_with_interaction_context(session_id, summary.strip(), created_at)
+
+            logger.info(f"✓ Generated interaction context for {interaction_id} and updated cache")
             return summary.strip()
         except Exception as e:
             logger.error(f"Error generating interaction context: {e}", exc_info=True)
@@ -452,25 +462,30 @@
 
     async def generate_session_context(self, session_id: str, user_id: str = "Test_Any") -> str:
         """
-
-
+        Generate Session Context (100-token summary) at every turn
+        Uses cached interaction contexts instead of querying database
+        Updates both database and cache immediately
         """
         try:
-
-
+            # Get interaction contexts from cache (no database query)
+            session_cache_key = f"session_{session_id}"
+            cached_context = self.session_cache.get(session_cache_key)
 
-
-
-
-            WHERE session_id = ?
-            ORDER BY created_at ASC
-            """, (session_id,))
+            if not cached_context:
+                logger.warning(f"No cached context found for session {session_id}, cannot generate session context")
+                return ""
 
-
-
+            interaction_contexts = cached_context.get('interaction_contexts', [])
+
+            if not interaction_contexts:
+                logger.info(f"No interaction contexts available for session {session_id} to summarize")
+                return ""
+
+            # Use cached interaction contexts (from cache, not database)
+            interaction_summaries = [ic.get('summary', '') for ic in interaction_contexts if ic.get('summary')]
 
             if not interaction_summaries:
-                logger.info(f"No
+                logger.info(f"No interaction summaries available for session {session_id}")
                 return ""
 
             # Generate session summary using LLM (100 tokens)
@@ -499,17 +514,22 @@ Keep the summary concise (approximately 100 tokens)."""
 
             if session_summary and isinstance(session_summary, str) and session_summary.strip():
                 # Store in database
+                created_at = datetime.now().isoformat()
                 conn = sqlite3.connect(self.db_path)
                 cursor = conn.cursor()
                 cursor.execute("""
                     INSERT OR REPLACE INTO session_contexts
                     (session_id, user_id, session_summary, created_at)
                     VALUES (?, ?, ?, ?)
-                """, (session_id, user_id, session_summary.strip(),
+                """, (session_id, user_id, session_summary.strip(), created_at))
                 conn.commit()
                 conn.close()
 
-
+                # Update cache immediately with new session context
+                # This ensures cache is synchronized with database at the same time
+                self._update_cache_with_session_context(session_id, session_summary.strip(), created_at)
+
+                logger.info(f"✓ Generated session context for {session_id} and updated cache")
                 return session_summary.strip()
         except Exception as e:
             logger.error(f"Error generating session context: {e}", exc_info=True)
@@ -523,12 +543,11 @@
 
     async def end_session(self, session_id: str, user_id: str = "Test_Any"):
         """
-
+        End session and clear cache
+        Note: Session context is already generated at every turn, so this just clears cache
         """
         try:
-            #
-            await self.generate_session_context(session_id, user_id)
-
+            # Session context is already generated at every turn (no need to regenerate)
             # Clear in-memory cache for this session (session-only key)
             session_cache_key = f"session_{session_id}"
             if session_cache_key in self.session_cache:
@@ -550,18 +569,22 @@
     def _optimize_context(self, context: dict) -> dict:
         """
         Optimize context for LLM consumption
-        Format: [Interaction Context #N, #N-1, ...]
+        Format: [Session Context] + [User Context] + [Interaction Context #N, #N-1, ...]
         """
         user_context = context.get("user_context", "")
         interaction_contexts = context.get("interaction_contexts", [])
+        session_context = context.get("session_context", {})
+        session_summary = session_context.get("summary", "") if isinstance(session_context, dict) else ""
 
         # Format interaction contexts as requested
         formatted_interactions = []
         for idx, ic in enumerate(interaction_contexts[:10]):  # Last 10 interactions
             formatted_interactions.append(f"[Interaction Context #{len(interaction_contexts) - idx}]\n{ic.get('summary', '')}")
 
-        # Combine User Context + Interaction Contexts
+        # Combine Session Context + User Context + Interaction Contexts
         combined_context = ""
+        if session_summary:
+            combined_context += f"[Session Context]\n{session_summary}\n\n"
         if user_context:
             combined_context += f"[User Context]\n{user_context}\n\n"
         if formatted_interactions:
@@ -571,6 +594,7 @@
             "session_id": context.get("session_id"),
             "user_id": context.get("user_id", "Test_Any"),
             "user_context": user_context,
+            "session_context": session_context,
             "interaction_contexts": interaction_contexts,
             "combined_context": combined_context,  # For direct use in prompts
             "preferences": context.get("preferences", {}),
@@ -684,10 +708,31 @@
                     "timestamp": timestamp
                 })
 
+            # Get session context from database
+            session_context_data = None
+            try:
+                cursor.execute("""
+                    SELECT session_summary, created_at
+                    FROM session_contexts
+                    WHERE session_id = ?
+                    ORDER BY created_at DESC
+                    LIMIT 1
+                """, (session_id,))
+                sc_row = cursor.fetchone()
+                if sc_row and sc_row[0]:
+                    session_context_data = {
+                        "summary": sc_row[0],
+                        "timestamp": sc_row[1]
+                    }
+            except sqlite3.OperationalError:
+                # Table might not exist yet
+                pass
+
             context = {
                 "session_id": session_id,
                 "user_id": user_id,
                 "interaction_contexts": interaction_contexts,
+                "session_context": session_context_data,
                 "preferences": user_metadata.get("preferences", {}),
                 "active_tasks": user_metadata.get("active_tasks", []),
                 "last_activity": last_activity,
@@ -711,6 +756,7 @@
                 "session_id": session_id,
                 "user_id": user_id,
                 "interaction_contexts": [],
+                "session_context": None,
                 "preferences": {},
                 "active_tasks": [],
                 "user_context_loaded": False,
@@ -730,6 +776,7 @@
                 "session_id": session_id,
                 "user_id": user_id,
                 "interaction_contexts": [],
+                "session_context": None,
                 "preferences": {},
                 "active_tasks": [],
                 "user_context_loaded": False,
@@ -749,6 +796,7 @@
                 "session_id": session_id,
                 "user_id": user_id,
                 "interaction_contexts": [],
+                "session_context": None,
                 "preferences": {},
                 "active_tasks": [],
                 "user_context_loaded": False,
@@ -762,6 +810,82 @@
         """
         self.session_cache[cache_key] = context
 
+    def _update_cache_with_interaction_context(self, session_id: str, interaction_summary: str, created_at: str):
+        """
+        Update cache with new interaction context immediately after database update
+        This keeps cache synchronized with database without requiring database queries
+        """
+        session_cache_key = f"session_{session_id}"
+
+        # Get current cached context if it exists
+        cached_context = self.session_cache.get(session_cache_key)
+
+        if cached_context:
+            # Add new interaction context to the beginning of the list (most recent first)
+            interaction_contexts = cached_context.get('interaction_contexts', [])
+            new_interaction = {
+                "summary": interaction_summary,
+                "timestamp": created_at
+            }
+            # Insert at beginning and keep only last 20 (matches DB query limit)
+            interaction_contexts.insert(0, new_interaction)
+            interaction_contexts = interaction_contexts[:20]
+
+            # Update cached context with new interaction contexts
+            cached_context['interaction_contexts'] = interaction_contexts
+            self.session_cache[session_cache_key] = cached_context
+
+            logger.debug(f"Cache updated with new interaction context for session {session_id} (total: {len(interaction_contexts)})")
+        else:
+            # If cache doesn't exist, create new entry
+            new_context = {
+                "session_id": session_id,
+                "interaction_contexts": [{
+                    "summary": interaction_summary,
+                    "timestamp": created_at
+                }],
+                "preferences": {},
+                "active_tasks": [],
+                "user_context_loaded": False
+            }
+            self.session_cache[session_cache_key] = new_context
+            logger.debug(f"Created new cache entry with interaction context for session {session_id}")
+
+    def _update_cache_with_session_context(self, session_id: str, session_summary: str, created_at: str):
+        """
+        Update cache with new session context immediately after database update
+        This keeps cache synchronized with database without requiring database queries
+        """
+        session_cache_key = f"session_{session_id}"
+
+        # Get current cached context if it exists
+        cached_context = self.session_cache.get(session_cache_key)
+
+        if cached_context:
+            # Update session context in cache
+            cached_context['session_context'] = {
+                "summary": session_summary,
+                "timestamp": created_at
+            }
+            self.session_cache[session_cache_key] = cached_context
+
+            logger.debug(f"Cache updated with new session context for session {session_id}")
+        else:
+            # If cache doesn't exist, create new entry
+            new_context = {
+                "session_id": session_id,
+                "session_context": {
+                    "summary": session_summary,
+                    "timestamp": created_at
+                },
+                "interaction_contexts": [],
+                "preferences": {},
+                "active_tasks": [],
+                "user_context_loaded": False
+            }
+            self.session_cache[session_cache_key] = new_context
+            logger.debug(f"Created new cache entry with session context for session {session_id}")
+
     def _update_context(self, context: dict, user_input: str, response: str = None, user_id: str = "Test_Any") -> dict:
         """
         Update context with deduplication and idempotency checks
@@ -1042,6 +1166,16 @@
         except:
             return False
 
+    def invalidate_session_cache(self, session_id: str):
+        """
+        Invalidate cached context for a session to force fresh retrieval
+        Only affects cache management - does not change application functionality
+        """
+        session_cache_key = f"session_{session_id}"
+        if session_cache_key in self.session_cache:
+            del self.session_cache[session_cache_key]
+            logger.info(f"Cache invalidated for session {session_id} to ensure fresh context retrieval")
+
     def optimize_database_indexes(self):
         """Create database indexes for better query performance"""
         try:
```
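A minimal usage sketch of the new `invalidate_session_cache()` helper follows. The session id and caller are hypothetical; the caller-side wiring is outside this hunk.

```python
# Illustrative only: after fresh context has been written for a session,
# dropping the cached entry makes the next retrieval fall back to
# _retrieve_from_db() instead of returning a stale in-memory copy.
session_id = "demo_session"  # hypothetical id
context_manager.invalidate_session_cache(session_id)
assert f"session_{session_id}" not in context_manager.session_cache
```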
src/orchestrator_engine.py
CHANGED
```diff
@@ -31,6 +31,9 @@ class MVPOrchestrator:
         self.context_manager = context_manager
         self.agents = agents
         self.execution_trace = []
 
         # Safety revision thresholds
         self.safety_thresholds = {
@@ -171,24 +174,29 @@
             # Use context with deduplication check
             context = await self._get_or_create_context(session_id, user_input, user_id)
 
-            logger.info(f"Context retrieved: {len(context.get('interaction_contexts', []))} interaction contexts")
-
-            # Add context analysis to reasoning chain
             interaction_contexts_count = len(context.get('interaction_contexts', []))
             user_context = context.get('user_context', '')
             has_user_context = bool(user_context)
 
             reasoning_chain["chain_of_thought"]["step_1"] = {
-                "hypothesis": f"User is asking about: '{
                 "evidence": [
                     f"Previous interaction contexts: {interaction_contexts_count}",
                     f"User context available: {has_user_context}",
                     f"Session duration: {self._calculate_session_duration(context)}",
-                    f"Topic continuity: {
-                    f"Query keywords: {
                 ],
                 "confidence": 0.85,
-                "reasoning": f"Context analysis shows user is focused on {
             }
 
             # Step 3: Intent recognition with enhanced CoT
@@ -235,12 +243,12 @@
                     f"Confidence score: {skills_result.get('confidence_score', 0.5)}"
                 ],
                 "confidence": skills_result.get('confidence_score', 0.5),
-                "reasoning": f"Skills identification completed for topic '{
             }
 
             # Add intent reasoning to chain
             reasoning_chain["chain_of_thought"]["step_2"] = {
-                "hypothesis": f"User intent is '{intent_result.get('primary_intent', 'unknown')}' for topic '{
                 "evidence": [
                     f"Pattern analysis: {self._extract_pattern_evidence(user_input)}",
                     f"Confidence scores: {intent_result.get('confidence_scores', {})}",
@@ -248,7 +256,7 @@
                     f"Query complexity: {self._assess_query_complexity(user_input)}"
                 ],
                 "confidence": intent_result.get('confidence_scores', {}).get(intent_result.get('primary_intent', 'unknown'), 0.7),
-                "reasoning": f"Intent '{intent_result.get('primary_intent', 'unknown')}' detected for {
             }
 
             # Step 4: Agent execution planning with reasoning
@@ -257,7 +265,7 @@
 
             # Add execution planning reasoning
             reasoning_chain["chain_of_thought"]["step_3"] = {
-                "hypothesis": f"Optimal approach for '{intent_result.get('primary_intent', 'unknown')}' intent on '{
                 "evidence": [
                     f"Intent complexity: {self._assess_intent_complexity(intent_result)}",
                     f"Required agents: {execution_plan.get('agents_to_execute', [])}",
@@ -265,7 +273,7 @@
                     f"Response scope: {self._determine_response_scope(user_input)}"
                 ],
                 "confidence": 0.80,
-                "reasoning": f"Agent selection optimized for {intent_result.get('primary_intent', 'unknown')} intent regarding {
             }
 
             # Step 5: Parallel agent execution
@@ -293,7 +301,7 @@
 
             # Add synthesis reasoning
             reasoning_chain["chain_of_thought"]["step_4"] = {
-                "hypothesis": f"Response synthesis for '{
                 "evidence": [
                     f"Synthesis quality: {final_response.get('coherence_score', 0.7)}",
                     f"Source integration: {len(final_response.get('source_references', []))} sources",
@@ -301,7 +309,7 @@
```
@@ -301,7 +309,7 @@ class MVPOrchestrator:
|
|
| 301 |
f"Content relevance: {self._assess_content_relevance(user_input, final_response)}"
|
| 302 |
],
|
| 303 |
"confidence": final_response.get('coherence_score', 0.7),
|
| 304 |
-
"reasoning": f"Multi-source synthesis for {
|
| 305 |
}
|
| 306 |
|
| 307 |
# Step 7: Safety and bias check with reasoning
|
|
@@ -373,7 +381,7 @@ This response has been flagged for potential safety concerns:
|
|
| 373 |
|
| 374 |
# Add safety reasoning
|
| 375 |
reasoning_chain["chain_of_thought"]["step_5"] = {
|
| 376 |
-
"hypothesis": f"Safety validation for response about '{
|
| 377 |
"evidence": [
|
| 378 |
f"Safety score: {safety_checked.get('safety_analysis', {}).get('overall_safety_score', 0.8)}",
|
| 379 |
f"Warnings generated: {len(safety_checked.get('warnings', []))}",
|
|
@@ -381,7 +389,7 @@ This response has been flagged for potential safety concerns:
|
|
| 381 |
f"Content appropriateness: {self._assess_content_appropriateness(user_input, safety_checked)}"
|
| 382 |
],
|
| 383 |
"confidence": safety_checked.get('safety_analysis', {}).get('overall_safety_score', 0.8),
|
| 384 |
-
"reasoning": f"Safety analysis for {
|
| 385 |
}
|
| 386 |
|
| 387 |
# Update final_response to use the response_content (which may have warnings appended)
|
|
@@ -392,7 +400,7 @@ This response has been flagged for potential safety concerns:
|
|
| 392 |
final_response['response'] = response_content
|
| 393 |
|
| 394 |
# Generate alternative paths and uncertainty analysis
|
| 395 |
-
reasoning_chain["alternative_paths"] = self._generate_alternative_paths(intent_result, user_input)
|
| 396 |
reasoning_chain["uncertainty_areas"] = self._identify_uncertainty_areas(intent_result, final_response, safety_checked)
|
| 397 |
reasoning_chain["evidence_sources"] = self._extract_evidence_sources(intent_result, final_response, context)
|
| 398 |
reasoning_chain["confidence_calibration"] = self._calibrate_confidence_scores(reasoning_chain)
|
|
@@ -446,6 +454,22 @@ This response has been flagged for potential safety concerns:
|
|
| 446 |
system_response=response_text,
|
| 447 |
user_id=user_id
|
| 448 |
)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 449 |
except Exception as e:
|
| 450 |
logger.error(f"Error generating interaction context: {e}", exc_info=True)
|
| 451 |
|
|
@@ -633,17 +657,23 @@ This response has been flagged for potential safety concerns:
|
|
| 633 |
return results
|
| 634 |
|
| 635 |
def _build_context_summary(self, context: dict) -> str:
|
| 636 |
-
"""Build a concise summary of context for task execution"""
|
| 637 |
summary_parts = []
|
| 638 |
|
| 639 |
-
# Extract
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 640 |
interaction_contexts = context.get('interaction_contexts', [])
|
| 641 |
if interaction_contexts:
|
| 642 |
recent_summaries = [ic.get('summary', '') for ic in interaction_contexts[-3:]]
|
| 643 |
if recent_summaries:
|
| 644 |
summary_parts.append(f"Recent conversation topics: {', '.join(recent_summaries)}")
|
| 645 |
|
| 646 |
-
# Extract user context
|
| 647 |
user_context = context.get('user_context', '')
|
| 648 |
if user_context:
|
| 649 |
summary_parts.append(f"User background: {user_context[:200]}")
|
|
@@ -1001,37 +1031,72 @@ Please revise the response to address these concerns while maintaining helpfulne
|
|
| 1001 |
else:
|
| 1002 |
return "Long session (> 20 interactions)"
|
| 1003 |
|
| 1004 |
-
def _analyze_topic_continuity(self, context: dict, user_input: str) -> str:
|
| 1005 |
-
"""Analyze topic continuity
|
| 1006 |
-
|
| 1007 |
-
|
| 1008 |
-
|
| 1009 |
-
|
| 1010 |
-
|
| 1011 |
-
|
| 1012 |
-
|
| 1013 |
-
|
| 1014 |
-
|
| 1015 |
-
|
| 1016 |
-
|
| 1017 |
-
|
| 1018 |
-
|
| 1019 |
-
|
| 1020 |
-
|
| 1021 |
-
|
| 1022 |
-
|
| 1023 |
-
|
| 1024 |
-
|
| 1025 |
-
|
| 1026 |
-
|
| 1027 |
-
|
| 1028 |
-
|
| 1029 |
-
|
| 1030 |
-
|
| 1031 |
-
|
| 1032 |
-
|
| 1033 |
-
|
| 1034 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1035 |
|
| 1036 |
def _extract_pattern_evidence(self, user_input: str) -> str:
|
| 1037 |
"""Extract pattern evidence for intent reasoning"""
|
|
@@ -1068,11 +1133,10 @@ Please revise the response to address these concerns while maintaining helpfulne
|
|
| 1068 |
else:
|
| 1069 |
return "Complex, multi-faceted intent"
|
| 1070 |
|
| 1071 |
-
def _generate_alternative_paths(self, intent_result: dict, user_input: str) -> list:
|
| 1072 |
"""Generate alternative reasoning paths based on actual content"""
|
| 1073 |
primary_intent = intent_result.get('primary_intent', 'unknown')
|
| 1074 |
secondary_intents = intent_result.get('secondary_intents', [])
|
| 1075 |
-
main_topic = self._extract_main_topic(user_input)
|
| 1076 |
|
| 1077 |
alternative_paths = []
|
| 1078 |
|
|
@@ -1213,55 +1277,92 @@ Please revise the response to address these concerns while maintaining helpfulne
|
|
| 1213 |
"calibration_method": "Weighted average of step confidences"
|
| 1214 |
}
|
| 1215 |
|
| 1216 |
-
def _extract_main_topic(self, user_input: str) -> str:
|
| 1217 |
-
"""Extract the main topic
|
| 1218 |
-
|
| 1219 |
-
|
| 1220 |
-
|
| 1221 |
-
|
| 1222 |
-
if
|
| 1223 |
-
|
| 1224 |
-
|
| 1225 |
-
|
| 1226 |
-
|
| 1227 |
-
|
| 1228 |
-
|
| 1229 |
-
|
| 1230 |
-
|
| 1231 |
-
|
| 1232 |
-
|
| 1233 |
-
|
| 1234 |
-
|
| 1235 |
-
|
| 1236 |
-
|
| 1237 |
-
|
| 1238 |
-
|
| 1239 |
-
|
| 1240 |
-
|
| 1241 |
-
|
| 1242 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1243 |
words = user_input.split()[:4]
|
| 1244 |
return " ".join(words) if words else "General inquiry"
|
| 1245 |
|
| 1246 |
-
def _extract_keywords(self, user_input: str) -> str:
|
| 1247 |
-
"""Extract key terms
|
| 1248 |
-
|
| 1249 |
-
|
| 1250 |
-
|
| 1251 |
-
|
| 1252 |
-
|
| 1253 |
-
'
|
| 1254 |
-
|
| 1255 |
-
|
| 1256 |
-
|
| 1257 |
-
|
| 1258 |
-
|
| 1259 |
-
|
| 1260 |
-
|
| 1261 |
-
|
| 1262 |
-
|
| 1263 |
-
|
| 1264 |
-
return ", ".join(keywords[:5]) if keywords else "General terms"
|
| 1265 |
|
| 1266 |
def _assess_query_complexity(self, user_input: str) -> str:
|
| 1267 |
"""Assess the complexity of the user query"""
|
|
@@ -1627,7 +1728,9 @@ Revised Response:"""
|
|
| 1627 |
"complex_refinement": "add clarifying details to your existing question"
|
| 1628 |
})
|
| 1629 |
|
| 1630 |
-
|
|
|
|
|
|
|
| 1631 |
|
| 1632 |
# Adaptive guidance based on input complexity
|
| 1633 |
if input_complexity["is_complex"]:
@@ -31,6 +31,9 @@ class MVPOrchestrator:
         self.context_manager = context_manager
         self.agents = agents
         self.execution_trace = []
+        # Cache for topic extraction to reduce API calls
+        self._topic_cache = {}
+        self._topic_cache_max_size = 100  # Limit cache size

         # Safety revision thresholds
         self.safety_thresholds = {
@@ -171,24 +174,29 @@ class MVPOrchestrator:
         # Use context with deduplication check
         context = await self._get_or_create_context(session_id, user_input, user_id)

         interaction_contexts_count = len(context.get('interaction_contexts', []))
+        logger.info(f"Context retrieved: {interaction_contexts_count} interaction contexts")
+
+        # Add context analysis to reasoning chain (using LLM-based topic extraction)
         user_context = context.get('user_context', '')
         has_user_context = bool(user_context)

+        # Extract topic and keywords using LLM (async)
+        main_topic = await self._extract_main_topic(user_input, context)
+        topic_continuity = await self._analyze_topic_continuity(context, user_input)
+        query_keywords = await self._extract_keywords(user_input)
+
         reasoning_chain["chain_of_thought"]["step_1"] = {
+            "hypothesis": f"User is asking about: '{main_topic}'",
             "evidence": [
                 f"Previous interaction contexts: {interaction_contexts_count}",
                 f"User context available: {has_user_context}",
                 f"Session duration: {self._calculate_session_duration(context)}",
+                f"Topic continuity: {topic_continuity}",
+                f"Query keywords: {query_keywords}"
             ],
             "confidence": 0.85,
+            "reasoning": f"Context analysis shows user is focused on {main_topic} with {interaction_contexts_count} previous interaction contexts and {'existing' if has_user_context else 'new'} user context"
         }

         # Step 3: Intent recognition with enhanced CoT
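To make the new step_1 structure concrete, a populated entry would look roughly like the sketch below at runtime. The values are invented for illustration; only the keys and f-string templates come from the hunk above.

```python
# Illustrative only: example of a populated step_1 entry (values are made up).
reasoning_chain["chain_of_thought"]["step_1"] = {
    "hypothesis": "User is asking about: 'Machine Learning Careers'",
    "evidence": [
        "Previous interaction contexts: 2",
        "User context available: True",
        "Session duration: Short session",
        "Topic continuity: Continuing machine learning discussion",
        "Query keywords: machine, learning, career, skills",
    ],
    "confidence": 0.85,
    "reasoning": "Context analysis shows user is focused on Machine Learning Careers "
                 "with 2 previous interaction contexts and existing user context",
}
```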
@@ -235,12 +243,12 @@ class MVPOrchestrator:
                 f"Confidence score: {skills_result.get('confidence_score', 0.5)}"
             ],
             "confidence": skills_result.get('confidence_score', 0.5),
+            "reasoning": f"Skills identification completed for topic '{main_topic}' with {len(skills_result.get('identified_skills', []))} relevant skills"
         }

         # Add intent reasoning to chain
         reasoning_chain["chain_of_thought"]["step_2"] = {
+            "hypothesis": f"User intent is '{intent_result.get('primary_intent', 'unknown')}' for topic '{main_topic}'",
             "evidence": [
                 f"Pattern analysis: {self._extract_pattern_evidence(user_input)}",
                 f"Confidence scores: {intent_result.get('confidence_scores', {})}",
@@ -248,7 +256,7 @@ class MVPOrchestrator:
                 f"Query complexity: {self._assess_query_complexity(user_input)}"
             ],
             "confidence": intent_result.get('confidence_scores', {}).get(intent_result.get('primary_intent', 'unknown'), 0.7),
+            "reasoning": f"Intent '{intent_result.get('primary_intent', 'unknown')}' detected for {main_topic} based on linguistic patterns and context"
         }

         # Step 4: Agent execution planning with reasoning
@@ -257,7 +265,7 @@ class MVPOrchestrator:

         # Add execution planning reasoning
         reasoning_chain["chain_of_thought"]["step_3"] = {
+            "hypothesis": f"Optimal approach for '{intent_result.get('primary_intent', 'unknown')}' intent on '{main_topic}'",
             "evidence": [
                 f"Intent complexity: {self._assess_intent_complexity(intent_result)}",
                 f"Required agents: {execution_plan.get('agents_to_execute', [])}",
@@ -265,7 +273,7 @@ class MVPOrchestrator:
                 f"Response scope: {self._determine_response_scope(user_input)}"
             ],
             "confidence": 0.80,
+            "reasoning": f"Agent selection optimized for {intent_result.get('primary_intent', 'unknown')} intent regarding {main_topic}"
         }

         # Step 5: Parallel agent execution
@@ -293,7 +301,7 @@ class MVPOrchestrator:

         # Add synthesis reasoning
         reasoning_chain["chain_of_thought"]["step_4"] = {
+            "hypothesis": f"Response synthesis for '{main_topic}' using '{final_response.get('synthesis_method', 'unknown')}' method",
             "evidence": [
                 f"Synthesis quality: {final_response.get('coherence_score', 0.7)}",
                 f"Source integration: {len(final_response.get('source_references', []))} sources",
@@ -301,7 +309,7 @@ class MVPOrchestrator:
                 f"Content relevance: {self._assess_content_relevance(user_input, final_response)}"
             ],
             "confidence": final_response.get('coherence_score', 0.7),
+            "reasoning": f"Multi-source synthesis for {main_topic} using {final_response.get('synthesis_method', 'unknown')} approach"
         }

         # Step 7: Safety and bias check with reasoning
@@ -373,7 +381,7 @@ This response has been flagged for potential safety concerns:

         # Add safety reasoning
         reasoning_chain["chain_of_thought"]["step_5"] = {
+            "hypothesis": f"Safety validation for response about '{main_topic}'",
             "evidence": [
                 f"Safety score: {safety_checked.get('safety_analysis', {}).get('overall_safety_score', 0.8)}",
                 f"Warnings generated: {len(safety_checked.get('warnings', []))}",
@@ -381,7 +389,7 @@ This response has been flagged for potential safety concerns:
                 f"Content appropriateness: {self._assess_content_appropriateness(user_input, safety_checked)}"
             ],
             "confidence": safety_checked.get('safety_analysis', {}).get('overall_safety_score', 0.8),
+            "reasoning": f"Safety analysis for {main_topic} content with non-blocking warning system"
         }

         # Update final_response to use the response_content (which may have warnings appended)
@@ -392,7 +400,7 @@ This response has been flagged for potential safety concerns:
         final_response['response'] = response_content

         # Generate alternative paths and uncertainty analysis
+        reasoning_chain["alternative_paths"] = self._generate_alternative_paths(intent_result, user_input, main_topic)
         reasoning_chain["uncertainty_areas"] = self._identify_uncertainty_areas(intent_result, final_response, safety_checked)
         reasoning_chain["evidence_sources"] = self._extract_evidence_sources(intent_result, final_response, context)
         reasoning_chain["confidence_calibration"] = self._calibrate_confidence_scores(reasoning_chain)
@@ -446,6 +454,22 @@ This response has been flagged for potential safety concerns:
                 system_response=response_text,
                 user_id=user_id
             )
+            # Cache is automatically updated by generate_interaction_context()
+
+            # STEP 3: Generate Session Context after each response (100 tokens)
+            # Uses cached interaction contexts, updates database and cache
+            try:
+                await self.context_manager.generate_session_context(session_id, user_id)
+                # Cache is automatically updated by generate_session_context()
+            except Exception as e:
+                logger.error(f"Error generating session context: {e}", exc_info=True)
+
+            # Clear orchestrator-level cache to force refresh on next request
+            if hasattr(self, '_context_cache'):
+                orchestrator_cache_key = f"context_{session_id}"
+                if orchestrator_cache_key in self._context_cache:
+                    del self._context_cache[orchestrator_cache_key]
+                    logger.debug(f"Orchestrator cache cleared for session {session_id} to refresh with updated contexts")
         except Exception as e:
             logger.error(f"Error generating interaction context: {e}", exc_info=True)

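The hunk above defines a strict end-of-turn ordering: interaction context first (which writes through to the session cache), session context second (built from those cached interactions), and only then the orchestrator-level cache entry is dropped. A condensed sketch of that ordering is shown below; the wrapper function and its arguments are assumed for illustration and are not part of the commit.

```python
# Condensed sketch of the end-of-turn ordering (hypothetical wrapper method).
async def _finalize_turn(self, session_id: str, user_id: str) -> None:
    # 1. Interaction context has already been generated and written through to the session cache.
    # 2. Regenerate the session context from the cached interaction contexts.
    await self.context_manager.generate_session_context(session_id, user_id)
    # 3. Drop the orchestrator-level entry so the next request rebuilds it from fresh data.
    self._context_cache.pop(f"context_{session_id}", None)
```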
@@ -633,17 +657,23 @@ This response has been flagged for potential safety concerns:
         return results

     def _build_context_summary(self, context: dict) -> str:
+        """Build a concise summary of context for task execution (all from cache)"""
         summary_parts = []

+        # Extract session context (from cache)
+        session_context = context.get('session_context', {})
+        session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
+        if session_summary:
+            summary_parts.append(f"Session summary: {session_summary[:1500]}")
+
+        # Extract interaction contexts (from cache)
         interaction_contexts = context.get('interaction_contexts', [])
         if interaction_contexts:
             recent_summaries = [ic.get('summary', '') for ic in interaction_contexts[-3:]]
             if recent_summaries:
                 summary_parts.append(f"Recent conversation topics: {', '.join(recent_summaries)}")

+        # Extract user context (from cache)
         user_context = context.get('user_context', '')
         if user_context:
             summary_parts.append(f"User background: {user_context[:200]}")
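Note that the summary builder only reads fields the cache writers populate, so a cached context like the one sketched below would yield all three summary parts. The values are invented, and how the parts are ultimately joined lies outside this hunk.

```python
# Illustrative input for _build_context_summary (values are made up).
cached_context = {
    "session_context": {"summary": "User exploring data science career options"},
    "interaction_contexts": [{"summary": "Discussed Python skills"}],
    "user_context": "Mid-career analyst looking to move into ML",
}
# Produces parts like:
#   "Session summary: User exploring data science career options"
#   "Recent conversation topics: Discussed Python skills"
#   "User background: Mid-career analyst looking to move into ML"
```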
@@ -1001,37 +1031,72 @@ Please revise the response to address these concerns while maintaining helpfulness
         else:
             return "Long session (> 20 interactions)"

+    async def _analyze_topic_continuity(self, context: dict, user_input: str) -> str:
+        """Analyze topic continuity using LLM zero-shot classification (uses session context and interaction contexts from cache)"""
+        try:
+            # Check session context first (from cache)
+            session_context = context.get('session_context', {})
+            session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
+
+            interaction_contexts = context.get('interaction_contexts', [])
+            if not interaction_contexts and not session_summary:
+                return "No previous context"
+
+            # Build context summary from cache
+            recent_interactions_summary = "\n".join([
+                f"- {ic.get('summary', '')}"
+                for ic in interaction_contexts[:3]
+                if ic.get('summary')
+            ])
+
+            # Use LLM for context-aware topic continuity analysis
+            if self.llm_router:
+                prompt = f"""Determine if the current query continues the previous conversation topic or introduces a new topic.
+
+Session Summary: {session_summary[:300] if session_summary else 'No session summary available'}
+
+Recent Interactions:
+{recent_interactions_summary if recent_interactions_summary else 'No recent interactions'}
+
+Current Query: "{user_input}"
+
+Analyze whether the current query:
+1. Continues the same topic from previous interactions
+2. Introduces a new topic
+
+Respond with EXACTLY one of these formats:
+- "Continuing [topic name] discussion" if same topic
+- "New topic: [topic name]" if different topic
+
+Keep topic name to 2-5 words. Example responses:
+- "Continuing machine learning discussion"
+- "New topic: financial analysis"
+- "Continuing software development discussion"
+"""
+
+                continuity_result = await self.llm_router.route_inference(
+                    task_type="general_reasoning",
+                    prompt=prompt,
+                    max_tokens=50,
+                    temperature=0.3  # Lower temperature for consistency
+                )
+
+                if continuity_result and isinstance(continuity_result, str) and continuity_result.strip():
+                    result = continuity_result.strip()
+                    # Validate format
+                    if "Continuing" in result or "New topic:" in result:
+                        logger.debug(f"Topic continuity analysis: {result}")
+                        return result
+
+            # Fallback to simple check if LLM unavailable
+            if not session_summary and not recent_interactions_summary:
+                return "No previous context"
+            return "Topic continuity analysis unavailable"
+
+        except Exception as e:
+            logger.error(f"Error in LLM-based topic continuity analysis: {e}", exc_info=True)
+            # Fallback
+            return "Topic continuity analysis failed"
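Both of the LLM-backed helpers in this file lean on the same `self.llm_router.route_inference(...)` contract as used above: an awaitable taking `task_type`, `prompt`, `max_tokens`, and `temperature` and returning the completion as a plain string (or a falsy value on failure). A stub honoring that contract, useful for exercising the continuity and topic helpers without a real model, might look like the following; the stub is an assumption for testing, not part of the commit.

```python
# Hypothetical stub of the router contract assumed by the helpers above.
class StubLLMRouter:
    def __init__(self, canned_response: str):
        self.canned_response = canned_response

    async def route_inference(self, task_type: str, prompt: str, max_tokens: int, temperature: float) -> str:
        # A real router would select a model based on task_type; the stub just echoes.
        return self.canned_response

# Example: orchestrator.llm_router = StubLLMRouter("Continuing machine learning discussion")
```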

     def _extract_pattern_evidence(self, user_input: str) -> str:
         """Extract pattern evidence for intent reasoning"""
@@ -1068,11 +1133,10 @@ Please revise the response to address these concerns while maintaining helpfulness
         else:
             return "Complex, multi-faceted intent"

+    def _generate_alternative_paths(self, intent_result: dict, user_input: str, main_topic: str) -> list:
         """Generate alternative reasoning paths based on actual content"""
         primary_intent = intent_result.get('primary_intent', 'unknown')
         secondary_intents = intent_result.get('secondary_intents', [])

         alternative_paths = []

@@ -1213,55 +1277,92 @@ Please revise the response to address these concerns while maintaining helpfulness
"calibration_method": "Weighted average of step confidences"
|
| 1278 |
}
|
| 1279 |
|
| 1280 |
+
async def _extract_main_topic(self, user_input: str, context: dict = None) -> str:
|
| 1281 |
+
"""Extract the main topic using LLM zero-shot classification with caching"""
|
| 1282 |
+
try:
|
| 1283 |
+
# Check cache first
|
| 1284 |
+
import hashlib
|
| 1285 |
+
cache_key = hashlib.md5(user_input.encode()).hexdigest()
|
| 1286 |
+
if cache_key in self._topic_cache:
|
| 1287 |
+
logger.debug(f"Topic cache hit for: {user_input[:50]}...")
|
| 1288 |
+
return self._topic_cache[cache_key]
|
| 1289 |
+
|
| 1290 |
+
# Use LLM for accurate topic extraction
|
| 1291 |
+
if self.llm_router:
|
| 1292 |
+
# Build context summary if available
|
| 1293 |
+
context_info = ""
|
| 1294 |
+
if context:
|
| 1295 |
+
session_context = context.get('session_context', {})
|
| 1296 |
+
session_summary = session_context.get('summary', '') if isinstance(session_context, dict) else ""
|
| 1297 |
+
interaction_count = len(context.get('interaction_contexts', []))
|
| 1298 |
+
|
| 1299 |
+
if session_summary:
|
| 1300 |
+
context_info = f"\n\nSession context: {session_summary[:200]}"
|
| 1301 |
+
if interaction_count > 0:
|
| 1302 |
+
context_info += f"\nPrevious interactions in session: {interaction_count}"
|
| 1303 |
+
|
| 1304 |
+
prompt = f"""Classify the main topic of this query in 2-5 words. Be specific and concise.
|
| 1305 |
+
|
| 1306 |
+
Query: "{user_input}"{context_info}
|
| 1307 |
+
|
| 1308 |
+
Respond with ONLY the topic name (e.g., "Machine Learning", "Healthcare Analytics", "Financial Modeling", "Software Development", "Educational Curriculum").
|
| 1309 |
+
|
| 1310 |
+
Do not include explanations, just the topic name. Maximum 5 words."""
|
| 1311 |
+
|
| 1312 |
+
topic_result = await self.llm_router.route_inference(
|
| 1313 |
+
task_type="classification",
|
| 1314 |
+
prompt=prompt,
|
| 1315 |
+
max_tokens=20,
|
| 1316 |
+
temperature=0.3 # Lower temperature for consistency
|
| 1317 |
+
)
|
| 1318 |
+
|
| 1319 |
+
if topic_result and isinstance(topic_result, str) and topic_result.strip():
|
| 1320 |
+
topic = topic_result.strip()
|
| 1321 |
+
# Clean up any extra text (LLM might add explanations)
|
| 1322 |
+
# Take first line and first 5 words max
|
| 1323 |
+
topic = topic.split('\n')[0].strip()
|
| 1324 |
+
words = topic.split()[:5]
|
| 1325 |
+
topic = " ".join(words)
|
| 1326 |
+
|
| 1327 |
+
# Cache the result
|
| 1328 |
+
if len(self._topic_cache) >= self._topic_cache_max_size:
|
| 1329 |
+
# Remove oldest entry (simple FIFO)
|
| 1330 |
+
oldest_key = next(iter(self._topic_cache))
|
| 1331 |
+
del self._topic_cache[oldest_key]
|
| 1332 |
+
|
| 1333 |
+
self._topic_cache[cache_key] = topic
|
| 1334 |
+
logger.debug(f"Topic extracted: {topic}")
|
| 1335 |
+
return topic
|
| 1336 |
+
|
| 1337 |
+
# Fallback to simple extraction if LLM unavailable
|
| 1338 |
+
words = user_input.split()[:4]
|
| 1339 |
+
fallback_topic = " ".join(words) if words else "General inquiry"
|
| 1340 |
+
logger.warning(f"Using fallback topic extraction: {fallback_topic}")
|
| 1341 |
+
return fallback_topic
|
| 1342 |
+
|
| 1343 |
+
except Exception as e:
|
| 1344 |
+
logger.error(f"Error in LLM-based topic extraction: {e}", exc_info=True)
|
| 1345 |
+
# Fallback
|
| 1346 |
words = user_input.split()[:4]
|
| 1347 |
return " ".join(words) if words else "General inquiry"
|
| 1348 |
|
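The topic cache above keys on an MD5 hash of the raw user input and evicts in insertion order once it holds `_topic_cache_max_size` entries; `next(iter(...))` returns the oldest key only because Python 3.7+ dicts preserve insertion order. A standalone sketch of the same first-in-first-out eviction behaviour is shown below, with illustrative names and a tiny size limit; it is not part of the commit.

```python
import hashlib

topic_cache: dict = {}
TOPIC_CACHE_MAX_SIZE = 3  # small limit just to demonstrate eviction

def cache_topic(user_input: str, topic: str) -> None:
    key = hashlib.md5(user_input.encode()).hexdigest()
    if len(topic_cache) >= TOPIC_CACHE_MAX_SIZE:
        # Oldest insertion is evicted first (relies on dict insertion order).
        topic_cache.pop(next(iter(topic_cache)))
    topic_cache[key] = topic

for i in range(5):
    cache_topic(f"query {i}", f"topic {i}")
print(len(topic_cache))  # 3 - only the most recent entries remain
```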
+    async def _extract_keywords(self, user_input: str) -> str:
+        """Extract key terms using LLM or simple extraction"""
+        try:
+            # Simple extraction for performance (keywords less critical than topic)
+            # Can be enhanced with LLM if needed
+            import re
+            # Extract meaningful words (3+ characters, not common stop words)
+            stop_words = {'the', 'and', 'for', 'are', 'but', 'not', 'you', 'all', 'can', 'her', 'was', 'one', 'our', 'out', 'day', 'get', 'has', 'him', 'his', 'how', 'its', 'may', 'new', 'now', 'old', 'see', 'two', 'way', 'who', 'boy', 'did', 'she', 'use', 'her', 'many', 'some', 'time', 'very', 'when', 'come', 'here', 'just', 'like', 'long', 'make', 'over', 'such', 'take', 'than', 'them', 'well', 'were'}
+
+            words = re.findall(r'\b[a-zA-Z]{3,}\b', user_input.lower())
+            keywords = [w for w in words if w not in stop_words][:5]
+
+            return ", ".join(keywords) if keywords else "General terms"
+
+        except Exception as e:
+            logger.error(f"Error in keyword extraction: {e}", exc_info=True)
+            return "General terms"

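For a concrete sense of what the regex-plus-stop-word pass produces, here is the same logic applied to a sample query outside the class. The abbreviated stop-word set and the query are illustrative only.

```python
import re

stop_words = {'the', 'and', 'for', 'are', 'when', 'how'}  # abbreviated for the example
user_input = "How are machine learning skills used for healthcare analytics?"

words = re.findall(r'\b[a-zA-Z]{3,}\b', user_input.lower())
keywords = [w for w in words if w not in stop_words][:5]
print(", ".join(keywords))  # machine, learning, skills, used, healthcare
```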
     def _assess_query_complexity(self, user_input: str) -> str:
         """Assess the complexity of the user query"""
@@ -1627,7 +1728,9 @@ Revised Response:"""
                 "complex_refinement": "add clarifying details to your existing question"
             })

+        # Topic extraction removed from error recovery to avoid async complexity
+        # Error recovery uses simplified context
+        topic = "Error recovery context"

         # Adaptive guidance based on input complexity
         if input_complexity["is_complex"]: