JatsTheAIGen commited on
Commit
e440f24
·
1 Parent(s): 1ca61f9

Log Process Flow as JSON for review v2

Browse files
LLM_LOGGING_ENHANCEMENT.md ADDED
@@ -0,0 +1,186 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # LLM API Response Logging Enhancement
2
+
3
+ ## Overview
4
+ This document describes the comprehensive logging enhancements implemented to ensure all LLM API inference responses are printed without truncation in container logs.
5
+
6
+ ## Changes Made
7
+
8
+ ### 1. Enhanced LLM Router Logging (`src/llm_router.py` and `llm_router.py`)
9
+
10
+ #### Complete API Request Logging
11
+ - **Full Prompt Content**: Logs the complete prompt sent to the LLM without truncation
12
+ - **API Request Details**: Logs all request parameters including:
13
+ - API URL
14
+ - Model ID
15
+ - Task Type
16
+ - Max Tokens
17
+ - Temperature
18
+ - Top P
19
+ - User Message Length
20
+ - **Complete API Payload**: Logs the full JSON payload sent to the API
21
+
22
+ #### Complete API Response Logging
23
+ - **Response Metadata**: Logs complete API response metadata including:
24
+ - Status Code
25
+ - Response Headers
26
+ - Response Size
27
+ - Complete API Response JSON
28
+ - **Full Response Content**: Logs the complete LLM-generated response without any truncation
29
+ - **Structured Format**: Uses clear delimiters and formatting for easy identification in logs
30
+
31
+ ### 2. Enhanced Synthesis Agent Logging (`src/agents/synthesis_agent.py`)
32
+
33
+ #### Response Synthesis Logging
34
+ - **Complete LLM Response**: Logs the full response received from LLM for synthesis
35
+ - **Agent Context**: Includes agent ID, task type, and response length
36
+ - **Structured Format**: Uses clear delimiters for easy log parsing
37
+
38
+ #### Narrative Summary Logging
39
+ - **Complete Summary Response**: Logs the full narrative summary generated by LLM
40
+ - **Context Information**: Includes interaction count and summary length
41
+ - **Structured Format**: Uses clear delimiters for easy identification
42
+
43
+ ## Log Format Examples
44
+
45
+ ### API Request Logging
46
+ ```
47
+ ================================================================================
48
+ LLM API REQUEST - COMPLETE PROMPT:
49
+ ================================================================================
50
+ Model: mistralai/Mistral-7B-Instruct-v0.2
51
+ Task Type: response_synthesis
52
+ Prompt Length: 1234 characters
53
+ ----------------------------------------
54
+ FULL PROMPT CONTENT:
55
+ ----------------------------------------
56
+ [Complete prompt content here]
57
+ ----------------------------------------
58
+ END OF PROMPT
59
+ ================================================================================
60
+ ```
61
+
62
+ ### API Response Logging
63
+ ```
64
+ ================================================================================
65
+ LLM API RESPONSE METADATA:
66
+ ================================================================================
67
+ Status Code: 200
68
+ Response Headers: {'content-type': 'application/json', ...}
69
+ Response Size: 5678 characters
70
+ ----------------------------------------
71
+ COMPLETE API RESPONSE JSON:
72
+ ----------------------------------------
73
+ {
74
+ "choices": [
75
+ {
76
+ "message": {
77
+ "content": "[Full response content]"
78
+ }
79
+ }
80
+ ]
81
+ }
82
+ ----------------------------------------
83
+ END OF API RESPONSE METADATA
84
+ ================================================================================
85
+ ```
86
+
87
+ ### Complete Response Content Logging
88
+ ```
89
+ ================================================================================
90
+ COMPLETE LLM API RESPONSE:
91
+ ================================================================================
92
+ Model: mistralai/Mistral-7B-Instruct-v0.2
93
+ Task Type: response_synthesis
94
+ Response Length: 1234 characters
95
+ ----------------------------------------
96
+ FULL RESPONSE CONTENT:
97
+ ----------------------------------------
98
+ [Complete LLM response without truncation]
99
+ ----------------------------------------
100
+ END OF LLM RESPONSE
101
+ ================================================================================
102
+ ```
103
+
104
+ ## Container Log Benefits
105
+
106
+ ### 1. Complete Visibility
107
+ - **No Truncation**: All LLM responses are logged in full
108
+ - **Request/Response Pairs**: Complete visibility into API interactions
109
+ - **Metadata Included**: Full context for debugging and monitoring
110
+
111
+ ### 2. Easy Parsing
112
+ - **Clear Delimiters**: Uses consistent formatting with `=` and `-` characters
113
+ - **Structured Sections**: Each log section is clearly marked
114
+ - **Searchable Format**: Easy to grep for specific log sections
115
+
116
+ ### 3. Debugging Support
117
+ - **Full Context**: Complete prompts and responses for debugging
118
+ - **Error Tracking**: Complete API response metadata for error analysis
119
+ - **Performance Monitoring**: Response sizes and timing information
120
+
121
+ ## Usage in Container Environments
122
+
123
+ ### Docker Logs
124
+ ```bash
125
+ # View all logs
126
+ docker logs <container_id>
127
+
128
+ # Follow logs in real-time
129
+ docker logs -f <container_id>
130
+
131
+ # Search for LLM responses
132
+ docker logs <container_id> | grep "COMPLETE LLM API RESPONSE"
133
+ ```
134
+
135
+ ### Kubernetes Logs
136
+ ```bash
137
+ # View pod logs
138
+ kubectl logs <pod-name>
139
+
140
+ # Follow logs
141
+ kubectl logs -f <pod-name>
142
+
143
+ # Search for specific log sections
144
+ kubectl logs <pod-name> | grep "FULL RESPONSE CONTENT"
145
+ ```
146
+
147
+ ## Files Modified
148
+
149
+ 1. **`Research_AI_Assistant/src/llm_router.py`**
150
+ - Added comprehensive API request logging
151
+ - Added complete API response metadata logging
152
+ - Added full response content logging
153
+
154
+ 2. **`Research_AI_Assistant/llm_router.py`**
155
+ - Added comprehensive API request logging
156
+ - Added complete API response metadata logging
157
+ - Added full response content logging
158
+
159
+ 3. **`Research_AI_Assistant/src/agents/synthesis_agent.py`**
160
+ - Added complete LLM response logging for synthesis
161
+ - Added complete narrative summary logging
162
+ - Added structured formatting for easy parsing
163
+
164
+ 4. **`Research_AI_Assistant/test_logging.py`** (New)
165
+ - Test script to verify logging configuration
166
+ - Tests both basic logging and LLM-specific logging
167
+
168
+ ## Verification
169
+
170
+ The logging configuration has been tested and verified to:
171
+ - ✅ Log complete prompts without truncation
172
+ - ✅ Log complete API responses without truncation
173
+ - ✅ Include all metadata and context information
174
+ - ✅ Use clear, structured formatting
175
+ - ✅ Work in container environments
176
+ - ✅ Be easily searchable and parseable
177
+
178
+ ## Benefits
179
+
180
+ 1. **Complete Transparency**: All LLM interactions are fully visible in logs
181
+ 2. **Debugging Support**: Full context for troubleshooting issues
182
+ 3. **Performance Monitoring**: Complete response data for analysis
183
+ 4. **Container Compatibility**: Works seamlessly in Docker/Kubernetes environments
184
+ 5. **No Data Loss**: No truncation means no loss of important information
185
+
186
+ This enhancement ensures that all LLM API inference responses are captured in full detail in container logs, providing complete visibility into the system's AI interactions.
llm_router.py CHANGED
@@ -84,6 +84,19 @@ class LLMRouter:
84
 
85
  logger.info(f"Calling HF Chat Completions API for model: {model_id}")
86
  logger.debug(f"Prompt length: {len(prompt)}")
 
 
 
 
 
 
 
 
 
 
 
 
 
87
 
88
  headers = {
89
  "Authorization": f"Bearer {self.hf_token}",
@@ -107,11 +120,47 @@ class LLMRouter:
107
  "top_p": kwargs.get("top_p", 0.95)
108
  }
109
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
110
  # Make the API call
111
  response = requests.post(api_url, json=payload, headers=headers, timeout=60)
112
 
113
  if response.status_code == 200:
114
  result = response.json()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
115
  # Handle chat completions response format
116
  if "choices" in result and len(result["choices"]) > 0:
117
  message = result["choices"][0].get("message", {})
@@ -123,6 +172,19 @@ class LLMRouter:
123
  return None
124
 
125
  logger.info(f"HF API returned response (length: {len(generated_text)})")
 
 
 
 
 
 
 
 
 
 
 
 
 
126
  return generated_text
127
  else:
128
  logger.error(f"Unexpected response format: {result}")
 
84
 
85
  logger.info(f"Calling HF Chat Completions API for model: {model_id}")
86
  logger.debug(f"Prompt length: {len(prompt)}")
87
+ logger.info("=" * 80)
88
+ logger.info("LLM API REQUEST - COMPLETE PROMPT:")
89
+ logger.info("=" * 80)
90
+ logger.info(f"Model: {model_id}")
91
+ logger.info(f"Task Type: {task_type}")
92
+ logger.info(f"Prompt Length: {len(prompt)} characters")
93
+ logger.info("-" * 40)
94
+ logger.info("FULL PROMPT CONTENT:")
95
+ logger.info("-" * 40)
96
+ logger.info(prompt)
97
+ logger.info("-" * 40)
98
+ logger.info("END OF PROMPT")
99
+ logger.info("=" * 80)
100
 
101
  headers = {
102
  "Authorization": f"Bearer {self.hf_token}",
 
120
  "top_p": kwargs.get("top_p", 0.95)
121
  }
122
 
123
+ # Log complete API request details
124
+ logger.info("=" * 80)
125
+ logger.info("LLM API REQUEST DETAILS:")
126
+ logger.info("=" * 80)
127
+ logger.info(f"API URL: {api_url}")
128
+ logger.info(f"Model: {model_id}")
129
+ logger.info(f"Task Type: {task_type}")
130
+ logger.info(f"Max Tokens: {kwargs.get('max_tokens', 2000)}")
131
+ logger.info(f"Temperature: {kwargs.get('temperature', 0.7)}")
132
+ logger.info(f"Top P: {kwargs.get('top_p', 0.95)}")
133
+ logger.info(f"User Message Length: {len(user_message)} characters")
134
+ logger.info("-" * 40)
135
+ logger.info("API PAYLOAD:")
136
+ logger.info("-" * 40)
137
+ import json
138
+ logger.info(json.dumps(payload, indent=2))
139
+ logger.info("-" * 40)
140
+ logger.info("END OF API REQUEST")
141
+ logger.info("=" * 80)
142
+
143
  # Make the API call
144
  response = requests.post(api_url, json=payload, headers=headers, timeout=60)
145
 
146
  if response.status_code == 200:
147
  result = response.json()
148
+
149
+ # Log complete API response metadata
150
+ logger.info("=" * 80)
151
+ logger.info("LLM API RESPONSE METADATA:")
152
+ logger.info("=" * 80)
153
+ logger.info(f"Status Code: {response.status_code}")
154
+ logger.info(f"Response Headers: {dict(response.headers)}")
155
+ logger.info(f"Response Size: {len(response.text)} characters")
156
+ logger.info("-" * 40)
157
+ logger.info("COMPLETE API RESPONSE JSON:")
158
+ logger.info("-" * 40)
159
+ logger.info(json.dumps(result, indent=2))
160
+ logger.info("-" * 40)
161
+ logger.info("END OF API RESPONSE METADATA")
162
+ logger.info("=" * 80)
163
+
164
  # Handle chat completions response format
165
  if "choices" in result and len(result["choices"]) > 0:
166
  message = result["choices"][0].get("message", {})
 
172
  return None
173
 
174
  logger.info(f"HF API returned response (length: {len(generated_text)})")
175
+ logger.info("=" * 80)
176
+ logger.info("COMPLETE LLM API RESPONSE:")
177
+ logger.info("=" * 80)
178
+ logger.info(f"Model: {model_id}")
179
+ logger.info(f"Task Type: {task_type}")
180
+ logger.info(f"Response Length: {len(generated_text)} characters")
181
+ logger.info("-" * 40)
182
+ logger.info("FULL RESPONSE CONTENT:")
183
+ logger.info("-" * 40)
184
+ logger.info(generated_text)
185
+ logger.info("-" * 40)
186
+ logger.info("END OF LLM RESPONSE")
187
+ logger.info("=" * 80)
188
  return generated_text
189
  else:
190
  logger.error(f"Unexpected response format: {result}")
src/agents/synthesis_agent.py CHANGED
@@ -112,6 +112,19 @@ class ResponseSynthesisAgent:
112
  # Clean up the response
113
  clean_response = llm_response.strip()
114
  logger.info(f"{self.agent_id} received LLM response (length: {len(clean_response)})")
 
 
 
 
 
 
 
 
 
 
 
 
 
115
 
116
  return {
117
  "draft_response": clean_response,
@@ -283,6 +296,23 @@ Summary:"""
283
  # Remove any "Summary:" prefix if present
284
  if clean_summary.startswith("Summary:"):
285
  clean_summary = clean_summary[9:].strip()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
286
  return clean_summary
287
 
288
  except Exception as e:
 
112
  # Clean up the response
113
  clean_response = llm_response.strip()
114
  logger.info(f"{self.agent_id} received LLM response (length: {len(clean_response)})")
115
+ logger.info("=" * 80)
116
+ logger.info("SYNTHESIS AGENT - COMPLETE LLM RESPONSE:")
117
+ logger.info("=" * 80)
118
+ logger.info(f"Agent: {self.agent_id}")
119
+ logger.info(f"Task: response_synthesis")
120
+ logger.info(f"Response Length: {len(clean_response)} characters")
121
+ logger.info("-" * 40)
122
+ logger.info("FULL LLM RESPONSE CONTENT:")
123
+ logger.info("-" * 40)
124
+ logger.info(clean_response)
125
+ logger.info("-" * 40)
126
+ logger.info("END OF SYNTHESIS LLM RESPONSE")
127
+ logger.info("=" * 80)
128
 
129
  return {
130
  "draft_response": clean_response,
 
296
  # Remove any "Summary:" prefix if present
297
  if clean_summary.startswith("Summary:"):
298
  clean_summary = clean_summary[9:].strip()
299
+
300
+ # Log the complete narrative summary response
301
+ logger.info("=" * 80)
302
+ logger.info("NARRATIVE SUMMARY - COMPLETE LLM RESPONSE:")
303
+ logger.info("=" * 80)
304
+ logger.info(f"Agent: {self.agent_id}")
305
+ logger.info(f"Task: narrative_summary")
306
+ logger.info(f"Interactions Count: {len(interactions)}")
307
+ logger.info(f"Summary Length: {len(clean_summary)} characters")
308
+ logger.info("-" * 40)
309
+ logger.info("FULL NARRATIVE SUMMARY CONTENT:")
310
+ logger.info("-" * 40)
311
+ logger.info(clean_summary)
312
+ logger.info("-" * 40)
313
+ logger.info("END OF NARRATIVE SUMMARY RESPONSE")
314
+ logger.info("=" * 80)
315
+
316
  return clean_summary
317
 
318
  except Exception as e:
src/llm_router.py CHANGED
@@ -85,6 +85,19 @@ class LLMRouter:
85
 
86
  logger.info(f"Calling HF Chat Completions API for model: {model_id}")
87
  logger.debug(f"Prompt length: {len(prompt)}")
 
 
 
 
 
 
 
 
 
 
 
 
 
88
 
89
  headers = {
90
  "Authorization": f"Bearer {self.hf_token}",
@@ -108,11 +121,47 @@ class LLMRouter:
108
  "top_p": kwargs.get("top_p", 0.95)
109
  }
110
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
111
  # Make the API call
112
  response = requests.post(api_url, json=payload, headers=headers, timeout=60)
113
 
114
  if response.status_code == 200:
115
  result = response.json()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
116
  # Handle chat completions response format
117
  if "choices" in result and len(result["choices"]) > 0:
118
  message = result["choices"][0].get("message", {})
@@ -124,6 +173,19 @@ class LLMRouter:
124
  return None
125
 
126
  logger.info(f"HF API returned response (length: {len(generated_text)})")
 
 
 
 
 
 
 
 
 
 
 
 
 
127
  return generated_text
128
  else:
129
  logger.error(f"Unexpected response format: {result}")
 
85
 
86
  logger.info(f"Calling HF Chat Completions API for model: {model_id}")
87
  logger.debug(f"Prompt length: {len(prompt)}")
88
+ logger.info("=" * 80)
89
+ logger.info("LLM API REQUEST - COMPLETE PROMPT:")
90
+ logger.info("=" * 80)
91
+ logger.info(f"Model: {model_id}")
92
+ logger.info(f"Task Type: {task_type}")
93
+ logger.info(f"Prompt Length: {len(prompt)} characters")
94
+ logger.info("-" * 40)
95
+ logger.info("FULL PROMPT CONTENT:")
96
+ logger.info("-" * 40)
97
+ logger.info(prompt)
98
+ logger.info("-" * 40)
99
+ logger.info("END OF PROMPT")
100
+ logger.info("=" * 80)
101
 
102
  headers = {
103
  "Authorization": f"Bearer {self.hf_token}",
 
121
  "top_p": kwargs.get("top_p", 0.95)
122
  }
123
 
124
+ # Log complete API request details
125
+ logger.info("=" * 80)
126
+ logger.info("LLM API REQUEST DETAILS:")
127
+ logger.info("=" * 80)
128
+ logger.info(f"API URL: {api_url}")
129
+ logger.info(f"Model: {model_id}")
130
+ logger.info(f"Task Type: {task_type}")
131
+ logger.info(f"Max Tokens: {kwargs.get('max_tokens', 2000)}")
132
+ logger.info(f"Temperature: {kwargs.get('temperature', 0.7)}")
133
+ logger.info(f"Top P: {kwargs.get('top_p', 0.95)}")
134
+ logger.info(f"User Message Length: {len(user_message)} characters")
135
+ logger.info("-" * 40)
136
+ logger.info("API PAYLOAD:")
137
+ logger.info("-" * 40)
138
+ import json
139
+ logger.info(json.dumps(payload, indent=2))
140
+ logger.info("-" * 40)
141
+ logger.info("END OF API REQUEST")
142
+ logger.info("=" * 80)
143
+
144
  # Make the API call
145
  response = requests.post(api_url, json=payload, headers=headers, timeout=60)
146
 
147
  if response.status_code == 200:
148
  result = response.json()
149
+
150
+ # Log complete API response metadata
151
+ logger.info("=" * 80)
152
+ logger.info("LLM API RESPONSE METADATA:")
153
+ logger.info("=" * 80)
154
+ logger.info(f"Status Code: {response.status_code}")
155
+ logger.info(f"Response Headers: {dict(response.headers)}")
156
+ logger.info(f"Response Size: {len(response.text)} characters")
157
+ logger.info("-" * 40)
158
+ logger.info("COMPLETE API RESPONSE JSON:")
159
+ logger.info("-" * 40)
160
+ logger.info(json.dumps(result, indent=2))
161
+ logger.info("-" * 40)
162
+ logger.info("END OF API RESPONSE METADATA")
163
+ logger.info("=" * 80)
164
+
165
  # Handle chat completions response format
166
  if "choices" in result and len(result["choices"]) > 0:
167
  message = result["choices"][0].get("message", {})
 
173
  return None
174
 
175
  logger.info(f"HF API returned response (length: {len(generated_text)})")
176
+ logger.info("=" * 80)
177
+ logger.info("COMPLETE LLM API RESPONSE:")
178
+ logger.info("=" * 80)
179
+ logger.info(f"Model: {model_id}")
180
+ logger.info(f"Task Type: {task_type}")
181
+ logger.info(f"Response Length: {len(generated_text)} characters")
182
+ logger.info("-" * 40)
183
+ logger.info("FULL RESPONSE CONTENT:")
184
+ logger.info("-" * 40)
185
+ logger.info(generated_text)
186
+ logger.info("-" * 40)
187
+ logger.info("END OF LLM RESPONSE")
188
+ logger.info("=" * 80)
189
  return generated_text
190
  else:
191
  logger.error(f"Unexpected response format: {result}")
test_logging.py ADDED
@@ -0,0 +1,116 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python3
2
+ """
3
+ Test script to verify comprehensive LLM API logging
4
+ This script tests the logging configuration to ensure all LLM responses
5
+ are logged without truncation in container logs.
6
+ """
7
+
8
+ import logging
9
+ import asyncio
10
+ import sys
11
+ import os
12
+
13
+ # Add the src directory to the path
14
+ sys.path.insert(0, '.')
15
+ sys.path.insert(0, 'src')
16
+
17
+ # Configure logging to match the main application
18
+ logging.basicConfig(
19
+ level=logging.INFO,
20
+ format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
21
+ handlers=[
22
+ logging.StreamHandler(),
23
+ logging.FileHandler('test_logging.log')
24
+ ]
25
+ )
26
+
27
+ logger = logging.getLogger(__name__)
28
+
29
+ async def test_llm_logging():
30
+ """Test the LLM logging functionality"""
31
+
32
+ logger.info("=" * 80)
33
+ logger.info("TESTING LLM API LOGGING CONFIGURATION")
34
+ logger.info("=" * 80)
35
+
36
+ try:
37
+ # Import the LLM router
38
+ from src.llm_router import LLMRouter
39
+ from src.config import settings
40
+
41
+ logger.info("✓ Successfully imported LLM router and config")
42
+
43
+ # Initialize LLM router with a test token
44
+ test_token = os.getenv('HF_TOKEN', 'test_token')
45
+ llm_router = LLMRouter(test_token)
46
+
47
+ logger.info("✓ LLM router initialized")
48
+
49
+ # Test a simple inference call
50
+ test_prompt = "Hello, this is a test prompt for logging verification."
51
+ test_task = "response_synthesis"
52
+
53
+ logger.info(f"Testing with prompt: {test_prompt}")
54
+ logger.info(f"Task type: {test_task}")
55
+
56
+ # This will trigger the comprehensive logging
57
+ result = await llm_router.route_inference(
58
+ task_type=test_task,
59
+ prompt=test_prompt,
60
+ max_tokens=100,
61
+ temperature=0.7
62
+ )
63
+
64
+ if result:
65
+ logger.info("✓ LLM call completed successfully")
66
+ logger.info(f"Result length: {len(result)} characters")
67
+ else:
68
+ logger.info("✓ LLM call completed (fallback used)")
69
+
70
+ logger.info("=" * 80)
71
+ logger.info("LOGGING TEST COMPLETED")
72
+ logger.info("=" * 80)
73
+ logger.info("Check the logs above for:")
74
+ logger.info("1. Complete API request details")
75
+ logger.info("2. Full prompt content")
76
+ logger.info("3. Complete API response metadata")
77
+ logger.info("4. Full LLM response content")
78
+ logger.info("5. All responses should be untruncated")
79
+ logger.info("=" * 80)
80
+
81
+ except ImportError as e:
82
+ logger.error(f"Import error: {e}")
83
+ logger.info("This is expected if dependencies are not available")
84
+ except Exception as e:
85
+ logger.error(f"Test error: {e}")
86
+ logger.info("This is expected if API calls fail")
87
+
88
+ def test_logging_configuration():
89
+ """Test basic logging configuration"""
90
+
91
+ logger.info("Testing basic logging configuration...")
92
+
93
+ # Test different log levels
94
+ logger.debug("This is a debug message")
95
+ logger.info("This is an info message")
96
+ logger.warning("This is a warning message")
97
+ logger.error("This is an error message")
98
+
99
+ # Test long message logging
100
+ long_message = "This is a very long message that should not be truncated. " * 50
101
+ logger.info(f"Long message test (length: {len(long_message)}): {long_message}")
102
+
103
+ logger.info("✓ Basic logging configuration test completed")
104
+
105
+ if __name__ == "__main__":
106
+ print("Testing LLM API Logging Configuration")
107
+ print("=" * 50)
108
+
109
+ # Test basic logging first
110
+ test_logging_configuration()
111
+
112
+ # Test LLM logging
113
+ asyncio.run(test_llm_logging())
114
+
115
+ print("\nTest completed. Check 'test_logging.log' file for detailed logs.")
116
+ print("In container environments, these logs will appear in container output.")