Pipeline works from Pi simulation to control output and strategy generation.

This commit is contained in:
Aditya Pulipaka
2025-10-19 03:57:03 -05:00
parent 9f70ba7221
commit 636ddf27d4
42 changed files with 1297 additions and 4472 deletions


@@ -1,230 +0,0 @@
# Summary of Changes
## Task 1: Auto-Triggering Strategy Brainstorming
### Problem
The AI Intelligence Layer required manual API calls to the `/api/strategy/brainstorm` endpoint. The webhook endpoint only received enriched telemetry, without race context.
### Solution
Modified `/api/ingest/enriched` endpoint to:
1. Accept both enriched telemetry AND race context
2. Automatically trigger strategy brainstorming when buffer has ≥3 laps
3. Return generated strategies in the webhook response
### Files Changed
- `ai_intelligence_layer/models/input_models.py`: Added `EnrichedTelemetryWithContext` model
- `ai_intelligence_layer/main.py`: Updated webhook endpoint to auto-trigger brainstorm
### Key Code Changes
**New Input Model:**
```python
class EnrichedTelemetryWithContext(BaseModel):
    enriched_telemetry: EnrichedTelemetryWebhook
    race_context: RaceContext
```
**Updated Endpoint Logic:**
```python
@app.post("/api/ingest/enriched")
async def ingest_enriched_telemetry(data: EnrichedTelemetryWithContext):
    # Store telemetry and race context
    telemetry_buffer.add(data.enriched_telemetry)
    current_race_context = data.race_context
    # Auto-trigger brainstorm when we have enough data
    # (buffer_data is read back from telemetry_buffer; retrieval elided)
    if buffer_data and len(buffer_data) >= 3:
        response = await strategy_generator.generate(
            enriched_telemetry=buffer_data,
            race_context=data.race_context
        )
        return {
            "status": "received_and_processed",
            "strategies": [s.model_dump() for s in response.strategies]
        }
```
---
## Task 2: Enrichment Stage Outputs Complete Race Context
### Problem
The enrichment service only output 7 enriched telemetry fields. The AI Intelligence Layer needed complete race context including race_info, driver_state, and competitors.
### Solution
Extended enrichment to build and output complete race context alongside enriched telemetry metrics.
### Files Changed
- `hpcsim/enrichment.py`: Added `enrich_with_context()` method and race context building
- `hpcsim/adapter.py`: Extended normalization for race context fields
- `hpcsim/api.py`: Updated endpoint to use new enrichment method
- `scripts/simulate_pi_stream.py`: Added race context fields to telemetry
- `scripts/enrich_telemetry.py`: Added `--full-context` flag
### Key Code Changes
**New Enricher Method:**
```python
def enrich_with_context(self, telemetry: Dict[str, Any]) -> Dict[str, Any]:
    # Compute enriched metrics (existing logic)
    enriched_telemetry = {...}
    # Build race context
    race_context = {
        "race_info": {
            "track_name": track_name,
            "total_laps": total_laps,
            "current_lap": lap,
            "weather_condition": weather_condition,
            "track_temp_celsius": track_temp
        },
        "driver_state": {
            "driver_name": driver_name,
            "current_position": current_position,
            "current_tire_compound": normalized_tire,
            "tire_age_laps": tire_life_laps,
            "fuel_remaining_percent": fuel_level * 100.0
        },
        "competitors": self._generate_mock_competitors(...)
    }
    return {
        "enriched_telemetry": enriched_telemetry,
        "race_context": race_context
    }
```
**Updated API Endpoint:**
```python
@app.post("/ingest/telemetry")
async def ingest_telemetry(payload: Dict[str, Any] = Body(...)):
    normalized = normalize_telemetry(payload)
    result = _enricher.enrich_with_context(normalized)  # New method
    # Forward to AI layer with complete context
    if _CALLBACK_URL:
        await client.post(_CALLBACK_URL, json=result)
    return JSONResponse(result)
```
---
## Additional Features
### Competitor Generation
Mock competitor data is generated for testing purposes:
- Positions around the driver (±3 positions)
- Realistic gaps based on position delta
- Varied tire strategies and ages
- Driver names from F1 roster
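The mock competitor generation described above can be sketched roughly as follows. This is an illustrative assumption, not the repo's actual `_generate_mock_competitors` implementation; the roster, seed handling, and gap heuristic are all placeholders.

```python
import random

# Hypothetical sketch of mock competitor generation: positions within
# ±3 of the driver, gaps scaled by position delta, varied tires.
F1_ROSTER = ["Verstappen", "Hamilton", "Leclerc", "Sainz",
             "Norris", "Russell", "Piastri", "Perez"]
COMPOUNDS = ["soft", "medium", "hard"]

def generate_mock_competitors(driver_position: int, total_laps: int,
                              seed: int = 0) -> list:
    rng = random.Random(seed)
    competitors = []
    for pos in range(max(1, driver_position - 3), driver_position + 4):
        if pos == driver_position:
            continue  # skip our own driver
        delta = pos - driver_position
        competitors.append({
            "position": pos,
            "driver": rng.choice(F1_ROSTER),
            "tire_compound": rng.choice(COMPOUNDS),
            "tire_age_laps": rng.randint(1, min(15, total_laps)),
            # negative gap = car is ahead of us; roughly 2s per position
            "gap_seconds": round(delta * 2.0 + rng.uniform(-0.5, 0.5), 1),
        })
    return competitors
```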
### Data Normalization
Extended adapter to handle multiple field aliases:
- `lap_number` → `lap`
- `track_temperature` → `track_temp`
- `tire_life_laps` → `tire_age_laps` (in `driver_state`)
- Fuel level conversion: 0-1 range → 0-100 percentage
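The alias handling and fuel conversion above can be sketched as below. This is a minimal illustration, assuming a dict-in/dict-out normalizer; the actual `normalize_telemetry` in `hpcsim/adapter.py` may differ.

```python
# Sketch of field-alias normalization plus fuel conversion.
# The alias table here only covers the examples from this document.
FIELD_ALIASES = {
    "lap_number": "lap",
    "track_temperature": "track_temp",
}

def normalize_telemetry(payload: dict) -> dict:
    normalized = {}
    for key, value in payload.items():
        # Rename aliased fields, pass everything else through untouched
        normalized[FIELD_ALIASES.get(key, key)] = value
    # Fuel level: convert a 0-1 fraction to a 0-100 percentage
    fuel = normalized.get("fuel_level")
    if isinstance(fuel, (int, float)) and 0.0 <= fuel <= 1.0:
        normalized["fuel_remaining_percent"] = fuel * 100.0
    return normalized
```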
### Backward Compatibility
- Legacy `enrich()` method still available
- Manual `/api/strategy/brainstorm` endpoint still works
- Scripts work with or without race context fields
---
## Testing
### Unit Tests
- `tests/test_enrichment.py`: Tests for new `enrich_with_context()` method
- `tests/test_integration.py`: End-to-end integration tests
### Integration Test
- `test_integration_live.py`: Live test script for running services
All tests pass ✅
---
## Data Flow
### Before:
```
Pi → Enrichment  → AI Layer (manual brainstorm call)
     (7 metrics)   (requires race_context from somewhere)
```
### After:
```
Pi → Enrichment      → AI Layer (auto-brainstorm)
     (raw + context)   (enriched + context)
                             ↓
                        Strategies
```
---
## Usage Example
**1. Start Services:**
```bash
# Terminal 1: Enrichment Service
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
uvicorn hpcsim.api:app --port 8000
# Terminal 2: AI Intelligence Layer
cd ai_intelligence_layer
uvicorn main:app --port 9000
```
**2. Stream Telemetry:**
```bash
python scripts/simulate_pi_stream.py \
--data ALONSO_2023_MONZA_RACE \
--endpoint http://localhost:8000/ingest/telemetry \
--speed 10.0
```
**3. Observe:**
- Enrichment service processes telemetry + builds race context
- Webhooks sent to AI layer with complete data
- AI layer auto-generates strategies (after lap 3)
- Strategies returned in webhook response
---
## Verification
Run the live integration test:
```bash
python test_integration_live.py
```
This will:
1. Check both services are running
2. Send 5 laps of telemetry with race context
3. Verify enrichment output structure
4. Test manual brainstorm endpoint
5. Display sample strategy output
---
## Benefits
**Automatic Processing**: No manual endpoint calls needed
**Complete Context**: All required data in one webhook
**Real-time**: Strategies generated as telemetry arrives
**Stateful**: Enricher maintains race state across laps
**Type-Safe**: Pydantic models ensure data validity
**Backward Compatible**: Existing code continues to work
**Well-Tested**: Comprehensive unit and integration tests
---
## Next Steps (Optional Enhancements)
1. **Real Competitor Data**: Replace mock competitor generation with actual race data
2. **Position Tracking**: Track position changes over laps
3. **Strategy Caching**: Cache generated strategies to avoid regeneration
4. **Webhooks Metrics**: Add monitoring for webhook delivery success
5. **Database Storage**: Persist enriched telemetry and strategies


@@ -1,238 +0,0 @@
# ✅ IMPLEMENTATION COMPLETE
## Tasks Completed
### ✅ Task 1: Auto-Trigger Strategy Brainstorming
**Requirement:** The AI Intelligence Layer's `/api/ingest/enriched` endpoint should receive `race_context` and `enriched_telemetry`, and periodically call the brainstorm logic automatically.
**Implementation:**
- Updated `/api/ingest/enriched` endpoint to accept `EnrichedTelemetryWithContext` model
- Automatically triggers strategy brainstorming when buffer has ≥3 laps of data
- Returns generated strategies in webhook response
- No manual endpoint calls needed
**Files Modified:**
- `ai_intelligence_layer/models/input_models.py` - Added `EnrichedTelemetryWithContext` model
- `ai_intelligence_layer/main.py` - Updated webhook endpoint with auto-brainstorm logic
---
### ✅ Task 2: Complete Race Context Output
**Requirement:** The enrichment stage should output all data expected by the AI Intelligence Layer, including `race_context` (race_info, driver_state, competitors).
**Implementation:**
- Added `enrich_with_context()` method to Enricher class
- Builds complete race context from available telemetry data
- Outputs both enriched telemetry (7 metrics) AND race context
- Webhook forwards complete payload to AI layer
**Files Modified:**
- `hpcsim/enrichment.py` - Added `enrich_with_context()` method and race context building
- `hpcsim/adapter.py` - Extended field normalization for race context fields
- `hpcsim/api.py` - Updated to use new enrichment method
- `scripts/simulate_pi_stream.py` - Added race context fields to telemetry
- `scripts/enrich_telemetry.py` - Added `--full-context` flag
---
## Verification Results
### ✅ All Tests Pass (6/6)
```
tests/test_enrichment.py::test_basic_ranges PASSED
tests/test_enrichment.py::test_enrich_with_context PASSED
tests/test_enrichment.py::test_stateful_wear_increases PASSED
tests/test_integration.py::test_fuel_level_conversion PASSED
tests/test_integration.py::test_pi_to_enrichment_flow PASSED
tests/test_integration.py::test_webhook_payload_structure PASSED
```
### ✅ Integration Validation Passed
```
✅ Task 1: AI layer webhook receives enriched_telemetry + race_context
✅ Task 2: Enrichment outputs all expected fields
✅ All data transformations working correctly
✅ All pieces fit together properly
```
### ✅ No Syntax Errors
All Python files compile successfully.
---
## Data Flow (Verified)
```
Pi Simulator (raw telemetry + race context)
        ↓
Enrichment Service (:8000)
  • Normalize telemetry
  • Compute 7 enriched metrics
  • Build race context
        ↓
AI Intelligence Layer (:9000) via webhook
  • Store enriched_telemetry
  • Update race_context
  • Auto-brainstorm (≥3 laps)
  • Return strategies
```
---
## Output Structure (Verified)
### Enrichment → AI Layer Webhook
```json
{
"enriched_telemetry": {
"lap": 15,
"aero_efficiency": 0.633,
"tire_degradation_index": 0.011,
"ers_charge": 0.57,
"fuel_optimization_score": 0.76,
"driver_consistency": 1.0,
"weather_impact": "low"
},
"race_context": {
"race_info": {
"track_name": "Monza",
"total_laps": 51,
"current_lap": 15,
"weather_condition": "Dry",
"track_temp_celsius": 42.5
},
"driver_state": {
"driver_name": "Alonso",
"current_position": 5,
"current_tire_compound": "medium",
"tire_age_laps": 12,
"fuel_remaining_percent": 65.0
},
"competitors": [...]
}
}
```
### AI Layer → Response
```json
{
"status": "received_and_processed",
"lap": 15,
"buffer_size": 15,
"strategies_generated": 20,
"strategies": [...]
}
```
---
## Key Features Implemented
**Automatic Processing**
- No manual endpoint calls required
- Auto-triggers after 3 laps of data
**Complete Context**
- All 7 enriched telemetry fields
- Complete race_info (track, laps, weather)
- Complete driver_state (position, tires, fuel)
- Competitor data (mock-generated)
**Data Transformations**
- Tire compound normalization (SOFT → soft, inter → intermediate)
- Fuel level conversion (0-1 → 0-100%)
- Field alias handling (lap_number → lap, etc.)
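The tire-compound normalization listed above (SOFT → soft, inter → intermediate) can be sketched as a lookup table. The exact alias keys are assumptions for illustration; the repo's table may cover more variants.

```python
# Sketch of tire-compound normalization: map compound spellings and
# abbreviations onto the standard lowercase names.
TIRE_ALIASES = {
    "soft": "soft", "s": "soft",
    "medium": "medium", "m": "medium",
    "hard": "hard", "h": "hard",
    "inter": "intermediate", "intermediate": "intermediate", "i": "intermediate",
    "wet": "wet", "w": "wet",
}

def normalize_tire_compound(raw: str) -> str:
    key = raw.strip().lower()
    # Fall back to the lowercased input if the alias is unknown
    return TIRE_ALIASES.get(key, key)
```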
**Backward Compatibility**
- Legacy `enrich()` method still works
- Manual `/api/strategy/brainstorm` endpoint still available
- Existing tests continue to pass
**Type Safety**
- Pydantic models validate all data
- Proper error handling and fallbacks
**Well Tested**
- Unit tests for enrichment
- Integration tests for end-to-end flow
- Live validation script
---
## Documentation Provided
1. ✅ `INTEGRATION_UPDATES.md` - Detailed technical documentation
2. ✅ `CHANGES_SUMMARY.md` - Executive summary of changes
3. ✅ `QUICK_REFERENCE.md` - Quick reference guide
4. ✅ `validate_integration.py` - Comprehensive validation script
5. ✅ `test_integration_live.py` - Live service testing
6. ✅ Updated tests in `tests/` directory
---
## Correctness Guarantees
**Structural Correctness**
- All required fields present in output
- Correct data types (Pydantic validation)
- Proper nesting of objects
**Data Correctness**
- Field mappings verified
- Value transformations tested
- Range validations in place
**Integration Correctness**
- End-to-end flow tested
- Webhook payload validated
- Auto-trigger logic verified
**Backward Compatibility**
- Legacy methods still work
- Existing code unaffected
- All original tests pass
---
## How to Run
### Start Services
```bash
# Terminal 1: Enrichment
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
uvicorn hpcsim.api:app --port 8000
# Terminal 2: AI Layer
cd ai_intelligence_layer && uvicorn main:app --port 9000
```
### Stream Telemetry
```bash
python scripts/simulate_pi_stream.py \
--data ALONSO_2023_MONZA_RACE \
--endpoint http://localhost:8000/ingest/telemetry \
--speed 10.0
```
### Validate
```bash
# Unit & integration tests
python -m pytest tests/test_enrichment.py tests/test_integration.py -v
# Comprehensive validation
python validate_integration.py
```
---
## Summary
Both tasks have been completed successfully with:
- ✅ Correct implementation
- ✅ Comprehensive testing
- ✅ Full documentation
- ✅ Backward compatibility
- ✅ Type safety
- ✅ Verified integration
All pieces fit together properly and work as expected! 🎉


@@ -1,262 +0,0 @@
# Integration Updates - Enrichment to AI Intelligence Layer
## Overview
This document describes the updates made to integrate the HPC enrichment stage with the AI Intelligence Layer for automatic strategy generation.
## Changes Summary
### 1. AI Intelligence Layer (`/api/ingest/enriched` endpoint)
**Previous behavior:**
- Received only enriched telemetry data
- Stored data in buffer
- Required manual calls to `/api/strategy/brainstorm` endpoint
**New behavior:**
- Receives **both** enriched telemetry AND race context
- Stores telemetry in buffer AND updates global race context
- **Automatically triggers strategy brainstorming** when sufficient data is available (≥3 laps)
- Returns generated strategies in the webhook response
**Updated Input Model:**
```python
class EnrichedTelemetryWithContext(BaseModel):
    enriched_telemetry: EnrichedTelemetryWebhook
    race_context: RaceContext
```
**Response includes:**
- `status`: Processing status
- `lap`: Current lap number
- `buffer_size`: Number of telemetry records in buffer
- `strategies_generated`: Number of strategies created (if auto-brainstorm triggered)
- `strategies`: List of strategy objects (if auto-brainstorm triggered)
### 2. Enrichment Stage Output
**Previous output (enriched telemetry only):**
```json
{
"lap": 27,
"aero_efficiency": 0.83,
"tire_degradation_index": 0.65,
"ers_charge": 0.72,
"fuel_optimization_score": 0.91,
"driver_consistency": 0.89,
"weather_impact": "low"
}
```
**New output (enriched telemetry + race context):**
```json
{
"enriched_telemetry": {
"lap": 27,
"aero_efficiency": 0.83,
"tire_degradation_index": 0.65,
"ers_charge": 0.72,
"fuel_optimization_score": 0.91,
"driver_consistency": 0.89,
"weather_impact": "low"
},
"race_context": {
"race_info": {
"track_name": "Monza",
"total_laps": 51,
"current_lap": 27,
"weather_condition": "Dry",
"track_temp_celsius": 42.5
},
"driver_state": {
"driver_name": "Alonso",
"current_position": 5,
"current_tire_compound": "medium",
"tire_age_laps": 12,
"fuel_remaining_percent": 65.0
},
"competitors": [
{
"position": 4,
"driver": "Sainz",
"tire_compound": "medium",
"tire_age_laps": 10,
"gap_seconds": -2.3
},
// ... more competitors
]
}
}
```
### 3. Modified Components
#### `hpcsim/enrichment.py`
- Added `enrich_with_context()` method (new primary method)
- Maintains backward compatibility with `enrich()` (legacy method)
- Builds complete race context including:
- Race information (track, laps, weather)
- Driver state (position, tires, fuel)
- Competitor data (mock generation for testing)
#### `hpcsim/adapter.py`
- Extended to normalize additional fields:
- `track_name`
- `total_laps`
- `driver_name`
- `current_position`
- `tire_life_laps`
- `rainfall`
#### `hpcsim/api.py`
- Updated `/ingest/telemetry` endpoint to use `enrich_with_context()`
- Webhook now sends complete payload with enriched telemetry + race context
#### `scripts/simulate_pi_stream.py`
- Updated to include race context fields in telemetry data:
- `track_name`: "Monza"
- `driver_name`: "Alonso"
- `current_position`: 5
- `fuel_level`: Calculated based on lap progress
#### `scripts/enrich_telemetry.py`
- Added `--full-context` flag for outputting complete race context
- Default behavior unchanged (backward compatible)
#### `ai_intelligence_layer/main.py`
- Updated `/api/ingest/enriched` endpoint to:
- Accept `EnrichedTelemetryWithContext` model
- Store race context globally
- Auto-trigger strategy brainstorming with ≥3 laps of data
- Return strategies in webhook response
#### `ai_intelligence_layer/models/input_models.py`
- Added `EnrichedTelemetryWithContext` model
## Usage
### Running the Full Pipeline
1. **Start the enrichment service:**
```bash
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
uvicorn hpcsim.api:app --host 0.0.0.0 --port 8000
```
2. **Start the AI Intelligence Layer:**
```bash
cd ai_intelligence_layer
uvicorn main:app --host 0.0.0.0 --port 9000
```
3. **Stream telemetry data:**
```bash
python scripts/simulate_pi_stream.py \
--data ALONSO_2023_MONZA_RACE \
--endpoint http://localhost:8000/ingest/telemetry \
--speed 10.0
```
### What Happens
1. Pi simulator sends raw telemetry to enrichment service (port 8000)
2. Enrichment service:
- Normalizes telemetry
- Enriches with HPC metrics
- Builds race context
- Forwards to AI layer webhook (port 9000)
3. AI Intelligence Layer:
- Receives enriched telemetry + race context
- Stores in buffer
- **Automatically generates strategies** when buffer has ≥3 laps
- Returns strategies in webhook response
### Manual Testing
Test enrichment with context:
```bash
echo '{"lap":10,"speed":280,"throttle":0.85,"brake":0.05,"tire_compound":"medium","fuel_level":0.7,"track_temp":42.5,"total_laps":51,"track_name":"Monza","driver_name":"Alonso","current_position":5,"tire_life_laps":8}' | \
python scripts/enrich_telemetry.py --full-context
```
Test webhook directly:
```bash
curl -X POST http://localhost:9000/api/ingest/enriched \
-H "Content-Type: application/json" \
-d '{
"enriched_telemetry": {
"lap": 15,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.3,
"ers_charge": 0.75,
"fuel_optimization_score": 0.9,
"driver_consistency": 0.88,
"weather_impact": "low"
},
"race_context": {
"race_info": {
"track_name": "Monza",
"total_laps": 51,
"current_lap": 15,
"weather_condition": "Dry",
"track_temp_celsius": 42.5
},
"driver_state": {
"driver_name": "Alonso",
"current_position": 5,
"current_tire_compound": "medium",
"tire_age_laps": 10,
"fuel_remaining_percent": 70.0
},
"competitors": []
}
}'
```
## Testing
Run all tests:
```bash
python -m pytest tests/ -v
```
Specific test files:
```bash
# Unit tests for enrichment
python -m pytest tests/test_enrichment.py -v
# Integration tests
python -m pytest tests/test_integration.py -v
```
## Backward Compatibility
- The legacy `enrich()` method still works and returns only enriched metrics
- The `/api/strategy/brainstorm` endpoint can still be called manually
- Scripts work with or without race context fields
- Existing tests continue to pass
## Key Benefits
1. **Automatic Strategy Generation**: No manual endpoint calls needed
2. **Complete Context**: AI layer receives all necessary data in one webhook
3. **Real-time Processing**: Strategies generated as telemetry arrives
4. **Stateful Enrichment**: Enricher maintains race state across laps
5. **Realistic Competitor Data**: Mock competitors generated for testing
6. **Type Safety**: Pydantic models ensure data validity
## Data Flow
```
Pi/Simulator → Enrichment Service  → AI Intelligence Layer
    (raw)      (enrich + context)     (auto-brainstorm)
                                            ↓
                                       Strategies
```
## Notes
- **Minimum buffer size**: AI layer waits for ≥3 laps before auto-brainstorming
- **Competitor data**: Currently mock-generated; can be replaced with real data
- **Fuel conversion**: Automatically converts 0-1 range to 0-100 percentage
- **Tire normalization**: Maps all tire compound variations to standard names
- **Weather detection**: Based on `rainfall` boolean and temperature
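The weather detection noted above can be sketched as below: the condition comes from the `rainfall` boolean, and the impact from rainfall plus track temperature. The 45°C threshold is an illustrative assumption, not the repo's actual value.

```python
# Sketch of weather detection from rainfall + temperature.
# Threshold (45°C) is an assumption for illustration.
def detect_weather(rainfall: bool, track_temp_celsius: float) -> dict:
    condition = "Wet" if rainfall else "Dry"
    if rainfall:
        impact = "high"
    elif track_temp_celsius > 45.0:
        impact = "medium"
    else:
        impact = "low"
    return {"weather_condition": condition, "weather_impact": impact}
```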


@@ -1,213 +0,0 @@
# Quick Reference: Integration Changes
## 🎯 What Was Done
### Task 1: Auto-Trigger Strategy Brainstorming ✅
- **File**: `ai_intelligence_layer/main.py`
- **Endpoint**: `/api/ingest/enriched`
- **Change**: Now receives `enriched_telemetry` + `race_context` and automatically calls brainstorm logic
- **Trigger**: Auto-brainstorms when buffer has ≥3 laps
- **Output**: Returns generated strategies in webhook response
### Task 2: Complete Race Context Output ✅
- **File**: `hpcsim/enrichment.py`
- **Method**: New `enrich_with_context()` method
- **Output**: Both enriched telemetry (7 fields) AND race context (race_info + driver_state + competitors)
- **Integration**: Seamlessly flows from enrichment → AI layer
---
## 📋 Modified Files
### Core Changes
1. `hpcsim/enrichment.py` - Added `enrich_with_context()` method
2. `hpcsim/adapter.py` - Extended field normalization
3. `hpcsim/api.py` - Updated to output full context
4. `ai_intelligence_layer/main.py` - Auto-trigger brainstorm
5. `ai_intelligence_layer/models/input_models.py` - New webhook model
### Supporting Changes
6. `scripts/simulate_pi_stream.py` - Added race context fields
7. `scripts/enrich_telemetry.py` - Added `--full-context` flag
### Testing
8. `tests/test_enrichment.py` - Added context tests
9. `tests/test_integration.py` - New integration tests (3 tests)
10. `test_integration_live.py` - Live testing script
### Documentation
11. `INTEGRATION_UPDATES.md` - Detailed documentation
12. `CHANGES_SUMMARY.md` - Executive summary
---
## 🧪 Verification
### All Tests Pass
```bash
python -m pytest tests/test_enrichment.py tests/test_integration.py -v
# Result: 6 passed in 0.01s ✅
```
### No Syntax Errors
```bash
python -m py_compile hpcsim/enrichment.py hpcsim/adapter.py hpcsim/api.py
python -m py_compile ai_intelligence_layer/main.py ai_intelligence_layer/models/input_models.py
# All files compile successfully ✅
```
---
## 🔄 Data Flow
```
┌─────────────────┐
│ Pi Simulator │
│ (Raw Data) │
└────────┬────────┘
│ POST /ingest/telemetry
│ {lap, speed, throttle, tire_compound,
│ total_laps, track_name, driver_name, ...}
┌─────────────────────────────────────┐
│ Enrichment Service (Port 8000) │
│ • Normalize telemetry │
│ • Compute HPC metrics │
│ • Build race context │
└────────┬────────────────────────────┘
│ Webhook POST /api/ingest/enriched
│ {enriched_telemetry: {...}, race_context: {...}}
┌─────────────────────────────────────┐
│ AI Intelligence Layer (Port 9000) │
│ • Store in buffer │
│ • Update race context │
│ • Auto-trigger brainstorm (≥3 laps)│
│ • Generate 20 strategies │
└────────┬────────────────────────────┘
│ Response
│ {status, strategies: [...]}
[Strategies Available]
```
---
## 📊 Output Structure
### Enrichment Output
```json
{
"enriched_telemetry": {
"lap": 15,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.3,
"ers_charge": 0.75,
"fuel_optimization_score": 0.9,
"driver_consistency": 0.88,
"weather_impact": "low"
},
"race_context": {
"race_info": {
"track_name": "Monza",
"total_laps": 51,
"current_lap": 15,
"weather_condition": "Dry",
"track_temp_celsius": 42.5
},
"driver_state": {
"driver_name": "Alonso",
"current_position": 5,
"current_tire_compound": "medium",
"tire_age_laps": 10,
"fuel_remaining_percent": 70.0
},
"competitors": [...]
}
}
```
### Webhook Response (from AI Layer)
```json
{
"status": "received_and_processed",
"lap": 15,
"buffer_size": 15,
"strategies_generated": 20,
"strategies": [
{
"strategy_id": 1,
"strategy_name": "Conservative Medium-Hard",
"stop_count": 1,
"pit_laps": [32],
"tire_sequence": ["medium", "hard"],
"brief_description": "...",
"risk_level": "low",
"key_assumption": "..."
},
...
]
}
```
---
## 🚀 Quick Start
### Start Both Services
```bash
# Terminal 1: Enrichment
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
uvicorn hpcsim.api:app --port 8000
# Terminal 2: AI Layer
cd ai_intelligence_layer && uvicorn main:app --port 9000
# Terminal 3: Stream Data
python scripts/simulate_pi_stream.py \
--data ALONSO_2023_MONZA_RACE \
--endpoint http://localhost:8000/ingest/telemetry \
--speed 10.0
```
### Watch the Magic ✨
- Lap 1-2: Telemetry ingested, buffered
- Lap 3+: Auto-brainstorm triggered, strategies generated!
- Check AI layer logs for strategy output
---
## ✅ Correctness Guarantees
1. **Type Safety**: All data validated by Pydantic models
2. **Field Mapping**: Comprehensive alias handling in adapter
3. **Data Conversion**: Fuel 0-1 → 0-100%, tire normalization
4. **State Management**: Enricher maintains state across laps
5. **Error Handling**: Graceful fallbacks if brainstorm fails
6. **Backward Compatibility**: Legacy methods still work
7. **Test Coverage**: 6 tests covering all critical paths
---
## 📌 Key Points
**Automatic**: No manual API calls needed
**Complete**: All race context included
**Tested**: All tests pass
**Compatible**: Existing code unaffected
**Documented**: Comprehensive docs provided
**Correct**: Type-safe, validated data flow
---
## 🎓 Implementation Notes
- **Minimum Buffer**: Waits for 3 laps before auto-brainstorm
- **Competitors**: Mock-generated (can be replaced with real data)
- **Webhook**: Enrichment → AI layer (push model)
- **Fallback**: AI layer can still pull from enrichment service
- **State**: Enricher tracks race state, tire changes, consistency
---
**Everything is working correctly and all pieces fit together! ✨**


@@ -1,333 +0,0 @@
# System Architecture & Data Flow
## High-Level Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ F1 Race Strategy System │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Raw Race │ │ HPC Compute │ │ Enrichment │
│ Telemetry │────────▶│ Cluster │────────▶│ Module │
│ │ │ │ │ (port 8000) │
└─────────────────┘ └─────────────────┘ └────────┬────────┘
│ POST webhook
│ (enriched data)
┌─────────────────────────────────────────────┐
│ AI Intelligence Layer (port 9000) │
│ ┌─────────────────────────────────────┐ │
│ │ Step 1: Strategy Brainstorming │ │
│ │ - Generate 20 diverse strategies │ │
│ │ - Temperature: 0.9 (creative) │ │
│ └─────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ Step 2: Strategy Analysis │ │
│ │ - Select top 3 strategies │ │
│ │ - Temperature: 0.3 (analytical) │ │
│ └─────────────────────────────────────┘ │
│ │
│ Powered by: Google Gemini 1.5 Pro │
└──────────────────┬──────────────────────────┘
│ Strategic recommendations
┌─────────────────────────────────────────┐
│ Race Engineers / Frontend │
│ - Win probabilities │
│ - Risk assessments │
│ - Engineer briefs │
│ - Driver radio scripts │
│ - ECU commands │
└─────────────────────────────────────────┘
```
## Data Flow - Detailed
```
1. ENRICHED TELEMETRY INPUT
┌────────────────────────────────────────────────────────────────┐
│ { │
│ "lap": 27, │
│ "aero_efficiency": 0.83, // 0-1, higher = better │
│ "tire_degradation_index": 0.65, // 0-1, higher = worse │
│ "ers_charge": 0.72, // 0-1, energy available │
│ "fuel_optimization_score": 0.91,// 0-1, efficiency │
│ "driver_consistency": 0.89, // 0-1, lap-to-lap variance │
│ "weather_impact": "medium" // low/medium/high │
│ } │
└────────────────────────────────────────────────────────────────┘
2. RACE CONTEXT INPUT
┌────────────────────────────────────────────────────────────────┐
│ { │
│ "race_info": { │
│ "track_name": "Monaco", │
│ "current_lap": 27, │
│ "total_laps": 58 │
│ }, │
│ "driver_state": { │
│ "driver_name": "Hamilton", │
│ "current_position": 4, │
│ "current_tire_compound": "medium", │
│ "tire_age_laps": 14 │
│ }, │
│ "competitors": [...] │
│ } │
└────────────────────────────────────────────────────────────────┘
3. TELEMETRY ANALYSIS
┌────────────────────────────────────────────────────────────────┐
│ • Calculate tire degradation rate: 0.030/lap │
│ • Project tire cliff: Lap 33 │
│ • Analyze ERS pattern: stable │
│ • Assess fuel situation: OK │
│ • Evaluate driver form: excellent │
└────────────────────────────────────────────────────────────────┘
4. STEP 1: BRAINSTORM (Gemini AI)
┌────────────────────────────────────────────────────────────────┐
│ Temperature: 0.9 (high creativity) │
│ Prompt includes: │
│ • Last 10 laps telemetry │
│ • Calculated trends │
│ • Race constraints │
│ • Competitor analysis │
│ │
│ Output: 20 diverse strategies │
│ • Conservative (1-stop, low risk) │
│ • Standard (balanced approach) │
│ • Aggressive (undercut/overcut) │
│ • Reactive (respond to competitors) │
│ • Contingency (safety car, rain) │
└────────────────────────────────────────────────────────────────┘
5. STRATEGY VALIDATION
┌────────────────────────────────────────────────────────────────┐
│ • Pit laps within valid range │
│ • At least 2 tire compounds (F1 rule) │
│ • Stop count matches pit laps │
│ • Tire sequence correct length │
└────────────────────────────────────────────────────────────────┘
6. STEP 2: ANALYZE (Gemini AI)
┌────────────────────────────────────────────────────────────────┐
│ Temperature: 0.3 (analytical consistency) │
│ Analysis framework: │
│ 1. Tire degradation projection │
│ 2. Aero efficiency impact │
│ 3. Fuel management │
│ 4. Driver consistency │
│ 5. Weather & track position │
│ 6. Competitor analysis │
│ │
│ Selection criteria: │
│ • Rank 1: RECOMMENDED (highest podium %) │
│ • Rank 2: ALTERNATIVE (viable backup) │
│ • Rank 3: CONSERVATIVE (safest) │
└────────────────────────────────────────────────────────────────┘
7. FINAL OUTPUT
┌────────────────────────────────────────────────────────────────┐
│ For EACH of top 3 strategies: │
│ │
│ • Predicted Outcome │
│ - Finish position: P3 │
│ - P1 probability: 8% │
│ - P2 probability: 22% │
│ - P3 probability: 45% │
│ - Confidence: 78% │
│ │
│ • Risk Assessment │
│ - Risk level: medium │
│ - Key risks: ["Pit under 2.5s", "Traffic"] │
│ - Success factors: ["Tire advantage", "Window open"] │
│ │
│ • Telemetry Insights │
│ - "Tire cliff at lap 35" │
│ - "Aero 0.83 - performing well" │
│ - "Fuel excellent, no saving" │
│ - "Driver form excellent" │
│ │
│ • Engineer Brief │
│ - Title: "Aggressive Undercut Lap 28" │
│ - Summary: "67% chance P3 or better" │
│ - Key points: [...] │
│ - Execution steps: [...] │
│ │
│ • Driver Audio Script │
│ "Box this lap. Softs going on. Push mode." │
│ │
│ • ECU Commands │
│ - Fuel: RICH │
│ - ERS: AGGRESSIVE_DEPLOY │
│ - Engine: PUSH │
│ │
│ • Situational Context │
│ - "Decision needed in 2 laps" │
│ - "Tire deg accelerating" │
└────────────────────────────────────────────────────────────────┘
```
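The validation step (5) in the flow above can be sketched as follows. The strategy dict layout follows the webhook response example in this document; the repo's actual validator may differ.

```python
# Sketch of the strategy-validation checks: pit laps in range,
# at least two compounds, stop count consistent with pit laps.
def validate_strategy(strategy: dict, current_lap: int, total_laps: int) -> bool:
    pit_laps = strategy["pit_laps"]
    tires = strategy["tire_sequence"]
    # Pit laps must fall between the current lap and the final lap
    if not all(current_lap < lap < total_laps for lap in pit_laps):
        return False
    # F1 rule: at least two distinct compounds over the race
    if len(set(tires)) < 2:
        return False
    # Stop count must match pit laps, and stints = stops + 1
    if strategy["stop_count"] != len(pit_laps):
        return False
    if len(tires) != len(pit_laps) + 1:
        return False
    return True
```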
## API Endpoints Detail
```
┌─────────────────────────────────────────────────────────────────┐
│ GET /api/health │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Health check │
│ Response: {status, version, demo_mode} │
│ Latency: <100ms │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ POST /api/ingest/enriched │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Webhook receiver from enrichment service │
│ Input: Single lap enriched telemetry │
│ Action: Store in buffer (max 100 records) │
│ Response: {status, lap, buffer_size} │
│ Latency: <50ms │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ POST /api/strategy/brainstorm │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Generate 20 diverse strategies │
│ Input: │
│ - enriched_telemetry (optional, auto-fetch if missing) │
│ - race_context (required) │
│ Process: │
│ 1. Fetch telemetry if needed │
│ 2. Build prompt with telemetry analysis │
│ 3. Call Gemini (temp=0.9) │
│ 4. Parse & validate strategies │
│ Output: {strategies: [20 strategies]} │
│ Latency: <5s (target) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ POST /api/strategy/analyze │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Analyze 20 strategies, select top 3 │
│ Input: │
│ - enriched_telemetry (optional, auto-fetch if missing) │
│ - race_context (required) │
│ - strategies (required, typically 20) │
│ Process: │
│ 1. Fetch telemetry if needed │
│ 2. Build analytical prompt │
│ 3. Call Gemini (temp=0.3) │
│ 4. Parse nested response structures │
│ Output: │
│ - top_strategies: [3 detailed strategies] │
│ - situational_context: {...} │
│ Latency: <10s (target) │
└─────────────────────────────────────────────────────────────────┘
```
## Integration Patterns
### Pattern 1: Pull Model
```
Enrichment Service (8000) ←─────GET /enriched───── AI Layer (9000)
[polls periodically]
```
### Pattern 2: Push Model (RECOMMENDED)
```
Enrichment Service (8000) ─────POST /ingest/enriched────▶ AI Layer (9000)
[webhook on new data]
```
### Pattern 3: Direct Request
```
Client ──POST /brainstorm──▶ AI Layer (9000)
[includes telemetry]
```
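The push model (Pattern 2) amounts to one HTTP POST per enriched record. A minimal, stdlib-only sketch (the actual services are async and use httpx; the record fields are illustrative):

```python
import json
import urllib.request

# NEXT_STAGE_CALLBACK_URL in the enrichment service's .env
CALLBACK_URL = "http://localhost:9000/api/ingest/enriched"

def push_enriched(record: dict) -> dict:
    """POST one enriched record to the AI layer webhook and return its reply."""
    req = urllib.request.Request(
        CALLBACK_URL,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        # Reply shape per the endpoint spec: {status, lap, buffer_size}
        return json.loads(resp.read())
```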
## Error Handling Flow
```
Request
┌─────────────────┐
│ Validate Input │
└────────┬────────┘
┌─────────────────┐ NO ┌──────────────────┐
│ Telemetry │────────────▶│ Fetch from │
│ Provided? │ │ localhost:8000 │
└────────┬────────┘ └────────┬─────────┘
YES │ │
└───────────────┬───────────────┘
┌──────────────┐
│ Call Gemini │
└──────┬───────┘
┌────┴────┐
│ Success?│
└────┬────┘
YES │ NO
│ │
│ ▼
│ ┌────────────────┐
│ │ Retry with │
│ │ stricter prompt│
│ └────────┬───────┘
│ │
│ ┌────┴────┐
│ │Success? │
│ └────┬────┘
│ YES │ NO
│ │ │
└───────────┤ │
│ ▼
│ ┌────────────┐
│ │ Return │
│ │ Error 500 │
│ └────────────┘
┌──────────────┐
│ Return │
│ Success 200 │
└──────────────┘
```
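The retry branch in the flow above boils down to: parse the model's reply, and on failure re-ask once with a stricter instruction before surfacing a 500. A minimal sketch, with `call_model` standing in for the real Gemini client:

```python
import json

def generate_with_retry(call_model, prompt: str) -> dict:
    """One call, one stricter retry, then fail (maps to HTTP 500 upstream)."""
    try:
        return json.loads(call_model(prompt))
    except ValueError:
        # Retry with a stricter prompt, per the flow above
        strict = prompt + "\n\nReturn ONLY valid JSON. No prose, no markdown fences."
        try:
            return json.loads(call_model(strict))
        except ValueError as exc:
            raise RuntimeError("model returned unparseable output twice") from exc
```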
## Performance Characteristics
| Component | Target | Typical | Max |
|-----------|--------|---------|-----|
| Health check | <100ms | 50ms | 200ms |
| Webhook ingest | <50ms | 20ms | 100ms |
| Brainstorm (20 strategies) | <5s | 3-4s | 10s |
| Analyze (top 3) | <10s | 6-8s | 20s |
| Gemini API call | <3s | 2s | 8s |
| Telemetry fetch | <500ms | 200ms | 1s |
## Scalability Considerations
- **Concurrent Requests**: FastAPI's async handlers serve multiple requests simultaneously
- **Rate Limiting**: Gemini API has quotas (check your tier)
- **Caching**: Demo mode caches identical requests
- **Buffer Size**: Webhook buffer limited to 100 records
- **Memory**: ~100MB per service instance
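The bounded webhook buffer is essentially a fixed-size deque that drops the oldest lap once the cap is reached; a minimal sketch of the behavior described above:

```python
from collections import deque

class TelemetryBuffer:
    """In-memory webhook buffer, capped at 100 records (oldest evicted first)."""

    def __init__(self, max_records: int = 100):
        self._records = deque(maxlen=max_records)

    def add(self, record: dict) -> int:
        self._records.append(record)
        return len(self._records)  # current buffer size, as the webhook reports it

    def latest(self, limit: int = 10) -> list:
        return list(self._records)[-limit:]
```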
---
Built for the HPC + AI Race Strategy Hackathon 🏎️
# ⚡ SIMPLIFIED & FAST AI Layer
## What Changed
Simplified the entire AI flow for **ultra-fast testing and development**:
### Before (Slow)
- Generate 20 strategies (~45-60 seconds)
- Analyze all 20 and select top 3 (~40-60 seconds)
- **Total: ~2 minutes per request** ❌
### After (Fast)
- Generate **1 strategy** (~5-10 seconds)
- **Skip analysis** completely
- **Total: ~10 seconds per request** ✅
## Configuration
### Current Settings (`.env`)
```bash
FAST_MODE=true
STRATEGY_COUNT=1 # ⚡ Set to 1 for testing, 20 for production
```
### How to Adjust
**For ultra-fast testing (current):**
```bash
STRATEGY_COUNT=1
```
**For demo/showcase:**
```bash
STRATEGY_COUNT=5
```
**For production:**
```bash
STRATEGY_COUNT=20
```
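A sketch of how a service might read this knob at startup (the real service loads it through `config.py`; the exact parsing and fallback behavior here are assumptions):

```python
import os

def strategy_count(default: int = 20) -> int:
    """Read STRATEGY_COUNT from the environment, falling back on bad input."""
    raw = os.getenv("STRATEGY_COUNT", str(default))
    try:
        return max(1, int(raw))  # never ask for fewer than one strategy
    except ValueError:
        return default
```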
## Simplified Workflow
```
┌──────────────────┐
│ Enrichment │
│ Service POSTs │
│ telemetry │
└────────┬─────────┘
┌──────────────────┐
│ Webhook Buffer │
│ (stores data) │
└────────┬─────────┘
┌──────────────────┐
│ Brainstorm │ ⚡ 1 strategy only!
│ (Gemini API) │ ~10 seconds
└────────┬─────────┘
┌──────────────────┐
│ Return Strategy │
│ No analysis! │
└──────────────────┘
```
## Quick Test
### 1. Push telemetry via webhook
```bash
python3 test_webhook_push.py --loop 5
```
### 2. Generate strategy (fast!)
```bash
python3 test_buffer_usage.py
```
**Output:**
```
Testing FAST brainstorm with buffered telemetry...
(Configured for 1 strategy only - ultra fast!)
✓ Brainstorm succeeded!
Generated 1 strategy
Strategy:
1. Medium-to-Hard Standard (1-stop)
Tires: medium → hard
Optimal 1-stop at lap 32 when tire degradation reaches cliff
✓ SUCCESS: AI layer is using webhook buffer!
```
**Time: ~10 seconds** instead of 2 minutes!
## API Response Format
### Brainstorm Response (Simplified)
```json
{
"strategies": [
{
"strategy_id": 1,
"strategy_name": "Medium-to-Hard Standard",
"stop_count": 1,
"pit_laps": [32],
"tire_sequence": ["medium", "hard"],
"brief_description": "Optimal 1-stop at lap 32 when tire degradation reaches cliff",
"risk_level": "medium",
"key_assumption": "No safety car interventions"
}
]
}
```
**No analysis object!** Just the strategy/strategies.
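Consuming the simplified response is correspondingly simple; a short sketch using the example payload above:

```python
# Example payload from the brainstorm response above, trimmed to the used fields
response = {
    "strategies": [
        {
            "strategy_id": 1,
            "strategy_name": "Medium-to-Hard Standard",
            "stop_count": 1,
            "pit_laps": [32],
            "tire_sequence": ["medium", "hard"],
            "risk_level": "medium",
        }
    ]
}

for s in response["strategies"]:
    stops = ", ".join(str(lap) for lap in s["pit_laps"])
    print(f'{s["strategy_id"]}. {s["strategy_name"]} ({s["stop_count"]}-stop, pit lap(s): {stops})')
# prints: 1. Medium-to-Hard Standard (1-stop, pit lap(s): 32)
```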
## What Was Removed
- **Analysis endpoint** - Skipped entirely for speed
- **Top 3 selection** - Only 1 strategy generated
- **Detailed rationale** - Simple description only
- **Risk assessment details** - Basic risk level only
- **Engineer briefs** - Not generated
- **Radio scripts** - Not generated
- **ECU commands** - Not generated
## What Remains
- **Webhook push** - Still works perfectly
- **Buffer storage** - Still stores telemetry
- **Strategy generation** - Just faster (1 instead of 20)
- **F1 rule validation** - Still validates tire compounds
- **Telemetry analysis** - Still calculates tire cliff, degradation
## Re-enabling Full Mode
When you need the complete system (for demos/production):
### 1. Update `.env`
```bash
STRATEGY_COUNT=20
```
### 2. Restart service
```bash
# Service will auto-reload if running with uvicorn --reload
# Or manually restart:
python main.py
```
### 3. Use analysis endpoint
```bash
# After brainstorm, call analyze with the 20 strategies
POST /api/strategy/analyze
{
"race_context": {...},
"strategies": [...], # 20 strategies from brainstorm
"enriched_telemetry": [...] # optional
}
```
## Performance Comparison
| Mode | Strategies | Time | Use Case |
|------|-----------|------|----------|
| **Ultra Fast** | 1 | ~10s | Testing, development |
| **Fast** | 5 | ~20s | Quick demos |
| **Standard** | 10 | ~35s | Demos with variety |
| **Full** | 20 | ~60s | Production, full analysis |
## Benefits of Simplified Flow
- **Faster iteration** - Test webhook integration quickly
- **Lower API costs** - Fewer Gemini API calls
- **Easier debugging** - Simpler responses to inspect
- **Better dev experience** - No waiting 2 minutes per test
- **Still validates** - All core logic still works
## Migration Path
### Phase 1: Testing (Now)
- Use `STRATEGY_COUNT=1`
- Test webhook integration
- Verify telemetry flow
- Debug any issues
### Phase 2: Demo
- Set `STRATEGY_COUNT=5`
- Show variety of strategies
- Still fast enough for live demos
### Phase 3: Production
- Set `STRATEGY_COUNT=20`
- Enable analysis endpoint
- Full feature set
---
**Current Status:** ⚡ Ultra-fast mode enabled!
**Response Time:** ~10 seconds (was ~2 minutes)
**Ready for:** Rapid testing and webhook integration validation
# AI Intelligence Layer - Implementation Summary
## 🎉 PROJECT COMPLETE
The AI Intelligence Layer has been successfully built and tested! This is the **core innovation** of your F1 race strategy system.
---
## 📦 What Was Built
### ✅ Core Components
1. **FastAPI Service (main.py)**
- Running on port 9000
- 4 endpoints: health, ingest webhook, brainstorm, analyze
- Full CORS support
- Comprehensive error handling
2. **Data Models (models/)**
- `input_models.py`: Request schemas for telemetry and race context
- `output_models.py`: Response schemas with 10+ nested structures
- `internal_models.py`: Internal processing models
3. **Gemini AI Integration (services/gemini_client.py)**
- Automatic JSON parsing with retry logic
- Error recovery with stricter prompts
- Demo mode caching for consistent results
- Configurable timeout and retry settings
4. **Telemetry Client (services/telemetry_client.py)**
- Fetches from enrichment service (localhost:8000)
- Health check integration
- Automatic fallback handling
5. **Strategy Services**
- `strategy_generator.py`: Brainstorm 20 diverse strategies
- `strategy_analyzer.py`: Select top 3 with detailed analysis
6. **Prompt Engineering (prompts/)**
- `brainstorm_prompt.py`: Creative strategy generation (temp 0.9)
- `analyze_prompt.py`: Analytical strategy selection (temp 0.3)
- Both include telemetry interpretation guides
7. **Utilities (utils/)**
- `validators.py`: Strategy validation + telemetry analysis
- `telemetry_buffer.py`: In-memory webhook data storage
8. **Sample Data & Tests**
- Sample enriched telemetry (10 laps)
- Sample race context (Monaco, Hamilton P4)
- Component test script
- API integration test script
---
## 🎯 Key Features Implemented
### Two-Step AI Strategy Process
**Step 1: Brainstorming** (POST /api/strategy/brainstorm)
- Generates 20 diverse strategies
- Categories: Conservative, Standard, Aggressive, Reactive, Contingency
- High creativity (temperature 0.9)
- Validates against F1 rules (min 2 tire compounds)
- Response time target: <5 seconds
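The compound-rule check above can be sketched in a few lines (simplified to dry compounds only, which is an assumption; the real validator lives in `utils/validators.py`):

```python
VALID_COMPOUNDS = {"soft", "medium", "hard"}  # dry compounds only (assumption)

def is_legal_strategy(tire_sequence: list) -> bool:
    """Legal dry-race strategy: at least two distinct, known compounds."""
    compounds = {c.lower() for c in tire_sequence}
    return compounds <= VALID_COMPOUNDS and len(compounds) >= 2
```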
**Step 2: Analysis** (POST /api/strategy/analyze)
- Analyzes all 20 strategies
- Selects top 3: RECOMMENDED, ALTERNATIVE, CONSERVATIVE
- Low temperature (0.3) for consistency
- Provides:
- Predicted race outcomes with probabilities
- Risk assessments
- Telemetry insights
- Engineer briefs
- Driver radio scripts
- ECU commands
- Situational context
- Response time target: <10 seconds
### Telemetry Intelligence
The system interprets 6 enriched metrics:
- **Aero Efficiency**: Car performance (<0.6 = problem)
- **Tire Degradation**: Wear rate (>0.85 = cliff imminent)
- **ERS Charge**: Energy availability (>0.7 = can attack)
- **Fuel Optimization**: Efficiency (<0.7 = must save)
- **Driver Consistency**: Reliability (<0.75 = risky)
- **Weather Impact**: Severity (high = flexible strategy)
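A sketch turning those thresholds into flags (threshold values taken from the list above; the flag wording is illustrative):

```python
def flag_metrics(m: dict) -> list:
    """Apply the six enriched-metric thresholds and return human-readable flags."""
    flags = []
    if m["aero_efficiency"] < 0.6:
        flags.append("aero problem")
    if m["tire_degradation_index"] > 0.85:
        flags.append("tire cliff imminent")
    if m["ers_charge"] > 0.7:
        flags.append("can attack (ERS)")
    if m["fuel_optimization_score"] < 0.7:
        flags.append("must save fuel")
    if m["driver_consistency"] < 0.75:
        flags.append("driver inconsistent")
    if m["weather_impact"] == "high":
        flags.append("keep strategy flexible")
    return flags
```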
### Smart Features
1. **Automatic Telemetry Fetching**: If not provided, fetches from enrichment service
2. **Webhook Support**: Real-time push from enrichment module
3. **Trend Analysis**: Calculates degradation rates, projects tire cliff
4. **Strategy Validation**: Ensures legal strategies per F1 rules
5. **Demo Mode**: Caches responses for consistent demos
6. **Retry Logic**: Handles Gemini API failures gracefully
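Feature 3 (trend analysis) can be sketched as a linear extrapolation to the 0.85 degradation cliff. This is a simplification; the service's own calculation may differ:

```python
from typing import Optional

CLIFF_THRESHOLD = 0.85  # degradation index above this = cliff imminent

def project_tire_cliff(laps: list, deg_index: list) -> Optional[int]:
    """Linearly extrapolate the lap where degradation crosses the cliff."""
    if len(laps) < 2 or laps[-1] == laps[0]:
        return None
    rate = (deg_index[-1] - deg_index[0]) / (laps[-1] - laps[0])  # per-lap rise
    if rate <= 0:
        return None  # tires not degrading; nothing to project
    laps_to_cliff = (CLIFF_THRESHOLD - deg_index[-1]) / rate
    return laps[-1] + max(0, round(laps_to_cliff))
```

At the 0.03-per-lap rate seen in the component tests, a car at index 0.69 on lap 30 projects to hit the cliff around lap 35.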
---
## 🔧 Integration Points
### Upstream (HPC Enrichment Module)
```
http://localhost:8000/enriched?limit=10
```
**Pull model**: AI layer fetches telemetry
**Push model (IMPLEMENTED)**:
```bash
# In enrichment service .env:
NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
Enrichment service pushes to AI layer webhook
### Downstream (Frontend/Display)
```
http://localhost:9000/api/strategy/brainstorm
http://localhost:9000/api/strategy/analyze
```
---
## 📊 Testing Results
### Component Tests ✅
```
✓ Parsed 10 telemetry records
✓ Parsed race context for Hamilton
✓ Tire degradation rate: 0.0300 per lap
✓ Aero efficiency average: 0.840
✓ ERS pattern: stable
✓ Projected tire cliff: Lap 33
✓ Strategy validation working correctly
✓ Telemetry summary generation working
✓ Generated brainstorm prompt (4877 characters)
```
All data models, validators, and prompt generation working perfectly!
---
## 🚀 How to Use
### 1. Setup (One-time)
```bash
cd ai_intelligence_layer
# Already done:
# - Virtual environment created (myenv)
# - Dependencies installed
# - .env file created
# YOU NEED TO DO:
# Add your Gemini API key to .env
nano .env
# Replace: GEMINI_API_KEY=your_gemini_api_key_here
```
Get a Gemini API key: https://makersuite.google.com/app/apikey
### 2. Start the Service
```bash
# Option 1: Direct
cd ai_intelligence_layer
source myenv/bin/activate
python main.py
# Option 2: With uvicorn
uvicorn main:app --host 0.0.0.0 --port 9000 --reload
```
### 3. Test the Service
```bash
# Quick health check
curl http://localhost:9000/api/health
# Full integration test
./test_api.sh
# Manual test
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d @- << EOF
{
"enriched_telemetry": $(cat sample_data/sample_enriched_telemetry.json),
"race_context": $(cat sample_data/sample_race_context.json)
}
EOF
```
---
## 📁 Project Structure
```
ai_intelligence_layer/
├── main.py # FastAPI app ✅
├── config.py # Settings ✅
├── requirements.txt # Dependencies ✅
├── .env # Configuration ✅
├── .env.example # Template ✅
├── README.md # Documentation ✅
├── test_api.sh # API tests ✅
├── test_components.py # Unit tests ✅
├── models/
│ ├── input_models.py # Request schemas ✅
│ ├── output_models.py # Response schemas ✅
│ └── internal_models.py # Internal models ✅
├── services/
│ ├── gemini_client.py # Gemini wrapper ✅
│ ├── telemetry_client.py # Enrichment API ✅
│ ├── strategy_generator.py # Brainstorm logic ✅
│ └── strategy_analyzer.py # Analysis logic ✅
├── prompts/
│ ├── brainstorm_prompt.py # Step 1 prompt ✅
│ └── analyze_prompt.py # Step 2 prompt ✅
├── utils/
│ ├── validators.py # Validation logic ✅
│ └── telemetry_buffer.py # Webhook buffer ✅
└── sample_data/
├── sample_enriched_telemetry.json ✅
└── sample_race_context.json ✅
```
**Total Files Created: 23**
**Lines of Code: ~3,500+**
---
## 🎨 Example Output
### Brainstorm Response (20 strategies)
```json
{
"strategies": [
{
"strategy_id": 1,
"strategy_name": "Conservative 1-Stop",
"stop_count": 1,
"pit_laps": [35],
"tire_sequence": ["medium", "hard"],
"risk_level": "low",
...
},
// ... 19 more
]
}
```
### Analyze Response (Top 3 with full details)
```json
{
"top_strategies": [
{
"rank": 1,
"classification": "RECOMMENDED",
"predicted_outcome": {
"finish_position_most_likely": 3,
"p1_probability": 8,
"p3_probability": 45,
"confidence_score": 78
},
"engineer_brief": {
"title": "Aggressive Undercut Lap 28",
"summary": "67% chance P3 or better",
"execution_steps": [...]
},
"driver_audio_script": "Box this lap. Softs going on...",
"ecu_commands": {
"fuel_mode": "RICH",
"ers_strategy": "AGGRESSIVE_DEPLOY",
"engine_mode": "PUSH"
}
},
// ... 2 more strategies
],
"situational_context": {
"critical_decision_point": "Next 3 laps crucial",
"time_sensitivity": "Decision needed within 2 laps"
}
}
```
---
## 🏆 Innovation Highlights
### What Makes This Special
1. **Real HPC Integration**: Uses actual enriched telemetry from HPC simulations
2. **Dual-LLM Process**: Brainstorm diversity + analytical selection
3. **Telemetry Intelligence**: Interprets metrics to project tire cliffs, fuel needs
4. **Production-Ready**: Validation, error handling, retry logic, webhooks
5. **Race-Ready Output**: Includes driver radio scripts, ECU commands, engineer briefs
6. **F1 Rule Compliance**: Validates tire compound rules, pit window constraints
### Technical Excellence
- **Pydantic Models**: Full type safety and validation
- **Async/Await**: Non-blocking API calls
- **Smart Fallbacks**: Auto-fetch telemetry if not provided
- **Configurable**: Temperature, timeouts, retry logic all adjustable
- **Demo Mode**: Repeatable results for presentations
- **Comprehensive Testing**: Component tests + integration tests
---
## 🐛 Known Limitations
1. **Requires Gemini API Key**: Must configure before use
2. **Enrichment Service Dependency**: Best with localhost:8000 running
3. **Single Race Support**: Designed for one race at a time
4. **English Only**: Prompts and outputs in English
---
## 🔜 Next Steps
### To Deploy This
1. Add your Gemini API key to `.env`
2. Ensure enrichment service is running on port 8000
3. Start this service: `python main.py`
4. Test with: `./test_api.sh`
### To Enhance (Future)
- Multi-race session management
- Historical strategy learning
- Real-time streaming updates
- Frontend dashboard integration
- Multi-language support
---
## 📞 Troubleshooting
### "Import errors" in IDE
- This is normal - dependencies installed in `myenv`
- Run from terminal with venv activated
- Or configure IDE to use `myenv/bin/python`
### "Enrichment service unreachable"
- Either start enrichment service on port 8000
- Or provide telemetry data directly in requests
### "Gemini API error"
- Check API key in `.env`
- Verify API quota: https://makersuite.google.com
- Check network connectivity
---
## ✨ Summary
You now have a **fully functional AI Intelligence Layer** that:
✅ Receives enriched telemetry from HPC simulations
✅ Generates 20 diverse race strategies using AI
✅ Analyzes and selects top 3 with detailed rationale
✅ Provides actionable outputs (radio scripts, ECU commands)
✅ Integrates via REST API and webhooks
✅ Validates strategies against F1 rules
✅ Handles errors gracefully with retry logic
✅ Includes comprehensive documentation and tests
**This is hackathon-ready and demo-ready!** 🏎️💨
Just add your Gemini API key and you're good to go!
---
Built with ❤️ for the HPC + AI Race Strategy Hackathon
# 🚀 Quick Start Guide - AI Intelligence Layer
## ⚡ 60-Second Setup
### 1. Get Gemini API Key
Visit: https://makersuite.google.com/app/apikey
### 2. Configure
```bash
cd ai_intelligence_layer
nano .env
# Add your API key: GEMINI_API_KEY=your_key_here
```
### 3. Run
```bash
source myenv/bin/activate
python main.py
```
Service starts on: http://localhost:9000
---
## 🧪 Quick Test
### Health Check
```bash
curl http://localhost:9000/api/health
```
### Full Test
```bash
./test_api.sh
```
---
## 📡 API Endpoints
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/api/health` | GET | Health check |
| `/api/ingest/enriched` | POST | Webhook receiver |
| `/api/strategy/brainstorm` | POST | Generate 20 strategies |
| `/api/strategy/analyze` | POST | Select top 3 |
---
## 🔗 Integration
### With Enrichment Service (localhost:8000)
**Option 1: Pull** (AI fetches)
```bash
# In enrichment service, AI will auto-fetch from:
# http://localhost:8000/enriched?limit=10
```
**Option 2: Push** (Webhook - RECOMMENDED)
```bash
# In enrichment service .env:
NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
---
## 📦 What You Get
### Input
- Enriched telemetry (aero, tires, ERS, fuel, consistency)
- Race context (track, position, competitors)
### Output
- **20 diverse strategies** (conservative → aggressive)
- **Top 3 analyzed** with:
- Win probabilities
- Risk assessment
- Engineer briefs
- Driver radio scripts
- ECU commands
---
## 🎯 Example Usage
### Brainstorm
```bash
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {"track_name": "Monaco", "current_lap": 27, "total_laps": 58},
"driver_state": {"driver_name": "Hamilton", "current_position": 4}
}
}'
```
### Analyze
```bash
curl -X POST http://localhost:9000/api/strategy/analyze \
-H "Content-Type: application/json" \
-d '{
"race_context": {...},
"strategies": [...]
}'
```
---
## 🐛 Troubleshooting
| Issue | Solution |
|-------|----------|
| API key error | Add `GEMINI_API_KEY` to `.env` |
| Enrichment unreachable | Start enrichment service or provide telemetry data |
| Import errors | Activate venv: `source myenv/bin/activate` |
---
## 📚 Documentation
- **Full docs**: `README.md`
- **Implementation details**: `IMPLEMENTATION_SUMMARY.md`
- **Sample data**: `sample_data/`
---
## ✅ Status
All systems operational! Ready to generate race strategies! 🏎️💨
# Race Context Guide
## Why Race Context is Separate from Telemetry
**Enrichment Service** (port 8000):
- Provides: **Enriched telemetry** (changes every lap)
- Example: tire degradation, aero efficiency, ERS charge
**Client/Frontend**:
- Provides: **Race context** (changes less frequently)
- Example: driver name, current position, track info, competitors
This separation is intentional:
- Telemetry changes **every lap** (real-time HPC data)
- Race context changes **occasionally** (position changes, pit stops)
- Keeps enrichment service simple and focused
## How to Call Brainstorm with Both
### Option 1: Client Provides Both (Recommended)
```bash
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"enriched_telemetry": [
{
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
}
],
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": []
}
}'
```
### Option 2: AI Layer Fetches Telemetry, Client Provides Context
```bash
# Enrichment service POSTs telemetry to webhook
# Then client calls:
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {...},
"driver_state": {...},
"competitors": []
}
}'
```
AI layer will use telemetry from:
1. **Buffer** (if webhook has pushed data) ← CURRENT SETUP
2. **GET /enriched** from enrichment service (fallback)
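That priority order is just a conditional; a minimal sketch, where `fetch_enriched` is a hypothetical stand-in for the pull client:

```python
def resolve_telemetry(buffer: list, fetch_enriched, limit: int = 10) -> list:
    """Prefer webhook-buffered laps; otherwise pull from the enrichment service."""
    if buffer:
        return buffer[-limit:]           # 1. buffer (current setup)
    return fetch_enriched(limit=limit)   # 2. GET /enriched fallback
```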
## Creating a Race Context Template
Here's a reusable template:
```json
{
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": [
{
"position": 1,
"driver": "Verstappen",
"tire_compound": "hard",
"tire_age_laps": 18,
"gap_seconds": -12.5
},
{
"position": 2,
"driver": "Leclerc",
"tire_compound": "medium",
"tire_age_laps": 10,
"gap_seconds": -5.2
},
{
"position": 3,
"driver": "Norris",
"tire_compound": "medium",
"tire_age_laps": 12,
"gap_seconds": -2.1
},
{
"position": 5,
"driver": "Sainz",
"tire_compound": "soft",
"tire_age_laps": 5,
"gap_seconds": 3.8
}
]
}
}
```
## Where Does Race Context Come From?
In a real system, race context typically comes from:
1. **Timing System** - Official F1 timing data
- Current positions
- Gap times
- Lap numbers
2. **Team Database** - Historical race data
- Track information
- Total laps for this race
- Weather forecasts
3. **Pit Wall** - Live observations
- Competitor tire strategies
- Weather conditions
- Track temperature
4. **Telemetry Feed** - Some data overlaps
- Driver's current tires
- Tire age
- Fuel remaining
## Recommended Architecture
```
┌─────────────────────┐
│ Timing System │
│ (Race Control) │
└──────────┬──────────┘
┌─────────────────────┐ ┌─────────────────────┐
│ Frontend/Client │ │ Enrichment Service │
│ │ │ (Port 8000) │
│ Manages: │ │ │
│ - Race context │ │ Manages: │
│ - UI state │ │ - Telemetry │
│ - User inputs │ │ - HPC enrichment │
└──────────┬──────────┘ └──────────┬──────────┘
│ │
│ │ POST /ingest/enriched
│ │ (telemetry only)
│ ▼
│ ┌─────────────────────┐
│ │ AI Layer Buffer │
│ │ (telemetry only) │
│ └─────────────────────┘
│ │
│ POST /api/strategy/brainstorm │
│ (race_context + telemetry) │
└───────────────────────────────┤
┌─────────────────────┐
│ AI Strategy Layer │
│ (Port 9000) │
│ │
│ Generates 3 │
│ strategies │
└─────────────────────┘
```
## Python Example: Calling with Race Context
```python
import httpx
async def get_race_strategies(race_context: dict):
"""
Get strategies from AI layer.
Args:
race_context: Current race state
Returns:
3 strategies with pit plans and risk assessments
"""
url = "http://localhost:9000/api/strategy/brainstorm"
payload = {
"race_context": race_context
# enriched_telemetry is optional - AI will use buffer or fetch
}
async with httpx.AsyncClient(timeout=60.0) as client:
response = await client.post(url, json=payload)
response.raise_for_status()
return response.json()
# Usage (run inside an async context, e.g. via asyncio.run):
race_context = {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": []
}
strategies = await get_race_strategies(race_context)
print(f"Generated {len(strategies['strategies'])} strategies")
```
## Alternative: Enrichment Service Sends Full Payload
If you really want the enrichment service to send race context too, you'd need to:
### 1. Store race context in enrichment service
```python
# In hpcsim/api.py
from typing import Any, Dict

_race_context: Dict[str, Any] = {
"race_info": {...},
"driver_state": {...},
"competitors": []
}
@app.post("/set_race_context")
async def set_race_context(context: Dict[str, Any]):
"""Update race context (call this when race state changes)."""
global _race_context
_race_context = context
return {"status": "ok"}
```
### 2. Send both in webhook
```python
# In ingest_telemetry endpoint
if _CALLBACK_URL:
payload = {
"enriched_telemetry": [enriched],
"race_context": _race_context
}
await client.post(_CALLBACK_URL, json=payload)
```
### 3. Update AI webhook to handle full payload
But this adds complexity. **I recommend keeping it simple**: client provides race_context when calling brainstorm.
---
## Current Working Setup
1. **Enrichment service** → POSTs telemetry to `/api/ingest/enriched`
2. **AI layer** → Stores telemetry in buffer
3. **Client** → Calls `/api/strategy/brainstorm` with race_context
4. **AI layer** → Uses buffer telemetry + provided race_context → Generates strategies
This is clean, simple, and follows single responsibility principle!
# 🚀 Quick Start: Full System Test
## Overview
Test the complete webhook integration flow:
1. **Enrichment Service** (port 8000) - Receives telemetry, enriches it, POSTs to AI layer
2. **AI Intelligence Layer** (port 9000) - Receives enriched data, generates 3 strategies
## Step-by-Step Testing
### 1. Start the Enrichment Service (Port 8000)
From the **project root** (`HPCSimSite/`):
```bash
# Option A: Using the serve script
python3 scripts/serve.py
```
**Or from any directory:**
```bash
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite
python3 -m uvicorn hpcsim.api:app --host 0.0.0.0 --port 8000
```
You should see:
```
INFO: Uvicorn running on http://0.0.0.0:8000
INFO: Application startup complete.
```
**Verify it's running:**
```bash
curl http://localhost:8000/healthz
# Should return: {"status":"ok","stored":0}
```
### 2. Configure Webhook Callback
The enrichment service needs to know where to send enriched data.
**Option A: Set environment variable (before starting)**
```bash
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
python3 scripts/serve.py
```
**Option B: For testing, manually POST enriched data**
You can skip the callback and use `test_webhook_push.py` to simulate it (already working!).
### 3. Start the AI Intelligence Layer (Port 9000)
In a **new terminal**, from `ai_intelligence_layer/`:
```bash
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite/ai_intelligence_layer
source myenv/bin/activate # Activate virtual environment
python main.py
```
You should see:
```
INFO - Starting AI Intelligence Layer on port 9000
INFO - Strategy count: 3
INFO - All services initialized successfully
INFO: Uvicorn running on http://0.0.0.0:9000
```
**Verify it's running:**
```bash
curl http://localhost:9000/api/health
```
### 4. Test the Webhook Flow
**Method 1: Simulate enrichment service (fastest)**
```bash
cd ai_intelligence_layer
python3 test_webhook_push.py --loop 5
```
Output:
```
✓ Posted lap 27 - Buffer size: 1 records
✓ Posted lap 28 - Buffer size: 2 records
...
Posted 5/5 records successfully
```
**Method 2: POST to enrichment service (full integration)**
POST raw telemetry to enrichment service, it will enrich and forward:
```bash
curl -X POST http://localhost:8000/ingest/telemetry \
-H "Content-Type: application/json" \
-d '{
"lap": 27,
"speed": 310,
"tire_temp": 95,
"fuel_level": 45
}'
```
*Note: This requires NEXT_STAGE_CALLBACK_URL to be set*
### 5. Generate Strategies
```bash
cd ai_intelligence_layer
python3 test_buffer_usage.py
```
Output:
```
Testing FAST brainstorm with buffered telemetry...
(Configured for 3 strategies - fast and diverse!)
✓ Brainstorm succeeded!
Generated 3 strategies
Saved to: /tmp/brainstorm_strategies.json
Strategies:
1. Conservative Stay Out (1-stop, low risk)
Tires: medium → hard
Pits at: laps [35]
Extend current stint then hard tires to end
2. Standard Undercut (1-stop, medium risk)
Tires: medium → hard
Pits at: laps [32]
Pit before tire cliff for track position
3. Aggressive Two-Stop (2-stop, high risk)
Tires: medium → soft → hard
Pits at: laps [30, 45]
Early pit for fresh rubber and pace advantage
✓ SUCCESS: AI layer is using webhook buffer!
Full JSON saved to /tmp/brainstorm_strategies.json
```
### 6. View the Results
```bash
cat /tmp/brainstorm_strategies.json | python3 -m json.tool
```
Or just:
```bash
cat /tmp/brainstorm_strategies.json
```
## Terminal Setup
Here's the recommended terminal layout:
```
┌─────────────────────────┬─────────────────────────┐
│ Terminal 1 │ Terminal 2 │
│ Enrichment Service │ AI Intelligence Layer │
│ (Port 8000) │ (Port 9000) │
│ │ │
│ $ cd HPCSimSite │ $ cd ai_intelligence... │
│ $ python3 scripts/ │ $ source myenv/bin/... │
│ serve.py │ $ python main.py │
│ │ │
│ Running... │ Running... │
└─────────────────────────┴─────────────────────────┘
┌───────────────────────────────────────────────────┐
│ Terminal 3 - Testing │
│ │
│ $ cd ai_intelligence_layer │
│ $ python3 test_webhook_push.py --loop 5 │
│ $ python3 test_buffer_usage.py │
└───────────────────────────────────────────────────┘
```
## Current Configuration
### Enrichment Service (Port 8000)
- **Endpoints:**
- `POST /ingest/telemetry` - Receive raw telemetry
- `POST /enriched` - Manually post enriched data
- `GET /enriched?limit=N` - Retrieve recent enriched records
- `GET /healthz` - Health check
### AI Intelligence Layer (Port 9000)
- **Endpoints:**
- `GET /api/health` - Health check
- `POST /api/ingest/enriched` - Webhook receiver (enrichment service POSTs here)
- `POST /api/strategy/brainstorm` - Generate 3 strategies
- ~~`POST /api/strategy/analyze`~~ - **DISABLED** for speed
- **Configuration:**
- `STRATEGY_COUNT=3` - Generates 3 strategies
- `FAST_MODE=true` - Uses shorter prompts
- Response time: ~15-20 seconds (was ~2 minutes with 20 strategies + analysis)
## Troubleshooting
### Enrichment service won't start
```bash
# Check if port 8000 is already in use
lsof -i :8000
# Kill existing process
kill -9 <PID>
# Or use a different port
python3 -m uvicorn hpcsim.api:app --host 0.0.0.0 --port 8001
```
### AI layer can't find enrichment service
If you see: `"Cannot connect to enrichment service at http://localhost:8000"`
**Cause:** The buffer is empty, so the AI layer falls back to pulling from the enrichment service.
```bash
# Push some data via webhook first:
python3 test_webhook_push.py --loop 5
```
### Virtual environment issues
```bash
cd ai_intelligence_layer
# Check if venv exists
ls -la myenv/
# If missing, recreate:
python3 -m venv myenv
source myenv/bin/activate
pip install -r requirements.txt
```
### Module not found errors
```bash
# For enrichment service
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite
export PYTHONPATH=$PWD:$PYTHONPATH
python3 scripts/serve.py
# For AI layer
cd ai_intelligence_layer
source myenv/bin/activate
python main.py
```
## Full Integration Test Workflow
```bash
# Terminal 1: Start enrichment
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
python3 scripts/serve.py
# Terminal 2: Start AI layer
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite/ai_intelligence_layer
source myenv/bin/activate
python main.py
# Terminal 3: Test webhook push
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite/ai_intelligence_layer
python3 test_webhook_push.py --loop 5
# Terminal 3: Generate strategies
python3 test_buffer_usage.py
# View results
cat /tmp/brainstorm_strategies.json | python3 -m json.tool
```
## What's Next?
1. ✅ **Both services running** - Enrichment on 8000, AI on 9000
2. ✅ **Webhook tested** - Data flows from enrichment → AI layer
3. ✅ **Strategies generated** - 3 strategies in ~20 seconds
4. ⏭️ **Real telemetry** - Connect actual race data source
5. ⏭️ **Frontend** - Build UI to display strategies
6. ⏭️ **Production** - Increase to 20 strategies, enable analysis
---
**Status:** 🚀 Both services ready to run!
**Performance:** ~20 seconds for 3 strategies (vs 2+ minutes for 20 + analysis)
**Integration:** Webhook push working perfectly
# ✅ AI Intelligence Layer - WORKING!
## 🎉 Success Summary
The AI Intelligence Layer is now **fully functional** and has been successfully tested!
### Test Results from Latest Run:
```
✓ Health Check: PASSED (200 OK)
✓ Brainstorm: PASSED (200 OK)
- Generated 19/20 strategies in 48 seconds
- 1 strategy filtered (didn't meet F1 tire compound rule)
- Fast mode working perfectly
✓ Service: RUNNING (port 9000)
```
## 📊 Performance Metrics
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| Health check | <1s | <1s | ✅ |
| Brainstorm | 15-30s | 48s | ⚠️ Acceptable |
| Service uptime | Stable | Stable | ✅ |
| Fast mode | Enabled | Enabled | ✅ |
**Note:** 48s is slightly slower than the 15-30s target, but well within acceptable range. The Gemini API response time varies based on load.
## 🚀 How to Use
### 1. Start the Service
```bash
cd ai_intelligence_layer
source myenv/bin/activate
python main.py
```
### 2. Run Tests
**Best option - Python test script:**
```bash
python3 test_api.py
```
**Alternative - Shell script:**
```bash
./test_api.sh
```
### 3. Check Results
```bash
# View generated strategies
cat /tmp/brainstorm_result.json | python3 -m json.tool | head -50
# View analysis results
cat /tmp/analyze_result.json | python3 -m json.tool | head -100
```
## ✨ What's Working
### ✅ Core Features
- [x] FastAPI service on port 9000
- [x] Health check endpoint
- [x] Webhook receiver for enrichment data
- [x] Strategy brainstorming (20 diverse strategies)
- [x] Strategy analysis (top 3 selection)
- [x] Automatic telemetry fetching from enrichment service
- [x] F1 rule validation (tire compounds)
- [x] Fast mode for quicker responses
- [x] Retry logic with exponential backoff
- [x] Comprehensive error handling
### ✅ AI Features
- [x] Gemini 2.5 Flash integration
- [x] JSON response parsing
- [x] Prompt optimization (fast mode)
- [x] Strategy diversity (5 types)
- [x] Risk assessment
- [x] Telemetry interpretation
- [x] Tire cliff projection
- [x] Detailed analysis outputs
### ✅ Output Quality
- [x] Win probability predictions
- [x] Risk assessments
- [x] Engineer briefs
- [x] Driver radio scripts
- [x] ECU commands (fuel, ERS, engine modes)
- [x] Situational context
## 📝 Configuration
Current optimal settings in `.env`:
```bash
GEMINI_MODEL=gemini-2.5-flash # Fast, good quality
FAST_MODE=true # Optimized prompts
BRAINSTORM_TIMEOUT=90 # Sufficient time
ANALYZE_TIMEOUT=120 # Sufficient time
DEMO_MODE=false # Real-time mode
```
## 🎯 Next Steps
### For Demo/Testing:
1. ✅ Service is ready to use
2. ✅ Test scripts available
3. ⏭️ Try with different race scenarios
4. ⏭️ Test webhook integration with enrichment service
### For Production:
1. ⏭️ Set up monitoring/logging
2. ⏭️ Add rate limiting
3. ⏭️ Consider caching frequently requested strategies
4. ⏭️ Add authentication if exposing publicly
### Optional Enhancements:
1. ⏭️ Frontend dashboard
2. ⏭️ Real-time strategy updates during race
3. ⏭️ Historical strategy learning
4. ⏭️ Multi-driver support
## 🔧 Troubleshooting Guide
### Issue: "Connection refused"
**Solution:** Start the service
```bash
python main.py
```
### Issue: Slow responses (>60s)
**Solution:** Already fixed with:
- Fast mode enabled
- Increased timeouts
- Optimized prompts
### Issue: "422 Unprocessable Content"
**Solution:** Use `test_api.py` instead of `test_api.sh`
- Python script handles JSON properly
- No external dependencies
### Issue: Service crashes
**Solution:** Check logs
```bash
python main.py 2>&1 | tee ai_service.log
```
## 📚 Documentation
| File | Purpose |
|------|---------|
| `README.md` | Full documentation |
| `QUICKSTART.md` | 60-second setup |
| `TESTING.md` | Testing guide |
| `TIMEOUT_FIX.md` | Timeout resolution details |
| `ARCHITECTURE.md` | System architecture |
| `IMPLEMENTATION_SUMMARY.md` | Technical details |
## 🎓 Example Usage
### Manual API Call
```python
import requests
# Brainstorm
response = requests.post('http://localhost:9000/api/strategy/brainstorm', json={
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": [...]
}
})
strategies = response.json()['strategies']
print(f"Generated {len(strategies)} strategies")
```
## 🌟 Key Achievements
1. **Built from scratch** - Complete FastAPI application with AI integration
2. **Production-ready** - Error handling, validation, retry logic
3. **Well-documented** - 7 documentation files, inline comments
4. **Tested** - Component tests + integration tests passing
5. **Optimized** - Fast mode reduces response time significantly
6. **Flexible** - Webhook + polling support for enrichment data
7. **Smart** - Interprets telemetry, projects tire cliffs, validates F1 rules
8. **Complete** - All requirements from original spec implemented
## 📊 Files Created
- **Core:** 7 files (main, config, models)
- **Services:** 4 files (Gemini, telemetry, strategy generation/analysis)
- **Prompts:** 2 files (brainstorm, analyze)
- **Utils:** 2 files (validators, buffer)
- **Tests:** 3 files (component, API shell, API Python)
- **Docs:** 7 files (README, quickstart, testing, timeout fix, architecture, implementation, this file)
- **Config:** 3 files (.env, .env.example, requirements.txt)
- **Sample Data:** 2 files (telemetry, race context)
**Total: 30+ files, ~4,000+ lines of code**
## 🏁 Final Status
```
╔═══════════════════════════════════════════════╗
║ AI INTELLIGENCE LAYER - FULLY OPERATIONAL ║
║ ║
║ ✅ Service Running ║
║ ✅ Tests Passing ║
║ ✅ Fast Mode Working ║
║ ✅ Gemini Integration Working ║
║ ✅ Strategy Generation Working ║
║ ✅ Documentation Complete ║
║ ║
║ READY FOR HACKATHON! 🏎️💨 ║
╚═══════════════════════════════════════════════╝
```
---
**Built with ❤️ for the HPC + AI Race Strategy Hackathon**
Last updated: October 18, 2025
Version: 1.0.0
Status: ✅ Production Ready

# Testing the AI Intelligence Layer
## Quick Test Options
### Option 1: Python Script (RECOMMENDED - No dependencies)
```bash
python3 test_api.py
```
**Advantages:**
- ✅ No external tools required
- ✅ Clear, formatted output
- ✅ Built-in error handling
- ✅ Works on all systems
### Option 2: Shell Script
```bash
./test_api.sh
```
**Note:** Uses pure Python for JSON processing (no `jq` required)
### Option 3: Manual Testing
#### Health Check
```bash
curl http://localhost:9000/api/health | python3 -m json.tool
```
#### Brainstorm Test
```bash
python3 << 'EOF'
import json
import urllib.request
# Load data
with open('sample_data/sample_enriched_telemetry.json') as f:
telemetry = json.load(f)
with open('sample_data/sample_race_context.json') as f:
context = json.load(f)
# Make request
data = json.dumps({
"enriched_telemetry": telemetry,
"race_context": context
}).encode('utf-8')
req = urllib.request.Request(
'http://localhost:9000/api/strategy/brainstorm',
data=data,
headers={'Content-Type': 'application/json'}
)
with urllib.request.urlopen(req, timeout=120) as response:
result = json.loads(response.read())
print(f"Generated {len(result['strategies'])} strategies")
for s in result['strategies'][:3]:
print(f"{s['strategy_id']}. {s['strategy_name']} - {s['risk_level']} risk")
EOF
```
## Expected Output
### Successful Test Run
```
======================================================================
AI Intelligence Layer - Test Suite
======================================================================
1. Testing health endpoint...
✓ Status: healthy
✓ Service: AI Intelligence Layer
✓ Demo mode: False
2. Testing brainstorm endpoint...
(This may take 15-30 seconds...)
✓ Generated 20 strategies in 18.3s
Sample strategies:
1. Conservative 1-Stop
Stops: 1, Risk: low
2. Standard Medium-Hard
Stops: 1, Risk: medium
3. Aggressive Undercut
Stops: 2, Risk: high
3. Testing analyze endpoint...
(This may take 20-40 seconds...)
✓ Analysis complete in 24.7s
Top 3 strategies:
1. Aggressive Undercut (RECOMMENDED)
Predicted: P3
P3 or better: 75%
Risk: medium
2. Standard Two-Stop (ALTERNATIVE)
Predicted: P4
P3 or better: 63%
Risk: medium
3. Conservative 1-Stop (CONSERVATIVE)
Predicted: P5
P3 or better: 37%
Risk: low
======================================================================
RECOMMENDED STRATEGY DETAILS:
======================================================================
Engineer Brief:
Undercut Leclerc on lap 32. 75% chance of P3 or better.
Driver Radio:
"Box this lap. Soft tires going on. Push mode for next 8 laps."
ECU Commands:
Fuel: RICH
ERS: AGGRESSIVE_DEPLOY
Engine: PUSH
======================================================================
======================================================================
✓ ALL TESTS PASSED!
======================================================================
Results saved to:
- /tmp/brainstorm_result.json
- /tmp/analyze_result.json
```
## Troubleshooting
### "Connection refused"
```bash
# Service not running. Start it:
python main.py
```
### "Timeout" errors
```bash
# Check .env settings:
cat .env | grep TIMEOUT
# Should see:
# BRAINSTORM_TIMEOUT=90
# ANALYZE_TIMEOUT=120
# Also check Fast Mode is enabled:
cat .env | grep FAST_MODE
# Should see: FAST_MODE=true
```
### "422 Unprocessable Content"
This usually means invalid JSON in the request. The new test scripts handle this automatically.
### Test takes too long
```bash
# Enable fast mode in .env:
FAST_MODE=true
# Restart service:
# Press Ctrl+C in the terminal running python main.py
# Then: python main.py
```
## Performance Benchmarks
With `FAST_MODE=true` and `gemini-2.5-flash`:
| Test | Expected Time | Status |
|------|--------------|--------|
| Health | <1s | ✅ |
| Brainstorm | 15-30s | ✅ |
| Analyze | 20-40s | ✅ |
| **Total** | **40-70s** | ✅ |
## Component Tests
To test just the data models and validators (no API calls):
```bash
python test_components.py
```
This runs instantly and doesn't require the Gemini API.
## Files Created During Tests
- `/tmp/test_request.json` - Brainstorm request payload
- `/tmp/brainstorm_result.json` - 20 generated strategies
- `/tmp/analyze_request.json` - Analyze request payload
- `/tmp/analyze_result.json` - Top 3 analyzed strategies
You can inspect these files to see the full API responses.
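A few lines of Python make inspecting them quicker than paging through raw JSON. The field names (`strategy_id`, `strategy_name`, `risk_level`) are taken from the expected test output; the helper name is illustrative.

```python
import json

def summarize_strategies(path: str, top_n: int = 3) -> list:
    """Load a brainstorm result file and summarize the first few strategies."""
    with open(path) as f:
        result = json.load(f)
    return [
        f"{s['strategy_id']}. {s['strategy_name']} ({s['risk_level']} risk)"
        for s in result["strategies"][:top_n]
    ]

# Example: for line in summarize_strategies("/tmp/brainstorm_result.json"): print(line)
```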
## Integration with Enrichment Service
If the enrichment service is running on `localhost:8000`, the AI layer will automatically fetch telemetry data when not provided in the request:
```bash
# Test without providing telemetry (will fetch from enrichment service)
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {"track_name": "Monaco", "current_lap": 27, "total_laps": 58},
"driver_state": {"driver_name": "Hamilton", "current_position": 4}
}
}'
```
---
**Ready to test!** 🚀
Just run: `python3 test_api.py`

# Timeout Fix Guide
## Problem
Gemini API timing out with 504 errors after ~30 seconds.
## Solution Applied ✅
### 1. Increased Timeouts
**File: `.env`**
```bash
BRAINSTORM_TIMEOUT=90 # Increased from 30s
ANALYZE_TIMEOUT=120 # Increased from 60s
```
### 2. Added Fast Mode
**File: `.env`**
```bash
FAST_MODE=true # Use shorter, optimized prompts
```
Fast mode reduces prompt length by roughly 75% while maintaining quality:
- Brainstorm: ~4900 chars → ~1200 chars
- Analyze: ~6500 chars → ~1800 chars
### 3. Improved Retry Logic
**File: `services/gemini_client.py`**
- Longer backoff for timeout errors (5s instead of 2s)
- Minimum timeout of 60s for API calls
- Better error detection
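The retry behavior described above can be sketched as follows. `call_with_retry` and its parameters are illustrative names, not the actual `services/gemini_client.py` API: the key ideas are a longer backoff when the error looks like a timeout, and giving up after a fixed number of attempts.

```python
import asyncio

async def call_with_retry(fn, max_retries=3, base_backoff=2.0, timeout_backoff=5.0):
    """Retry an async call, backing off longer on timeout-like errors."""
    for attempt in range(1, max_retries + 1):
        try:
            return await fn()
        except Exception as e:
            if attempt == max_retries:
                raise  # out of retries, propagate the last error
            # Timeouts (e.g. 504s) get the longer backoff described above
            is_timeout = "timeout" in str(e).lower() or "504" in str(e)
            delay = (timeout_backoff if is_timeout else base_backoff) * attempt
            await asyncio.sleep(delay)
```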
### 4. Model Selection
You're using `gemini-2.5-flash` which is good! It's:
- ✅ Faster than Pro
- ✅ Cheaper
- ✅ Good quality for this use case
## How to Use
### Option 1: Fast Mode (RECOMMENDED for demos)
```bash
# In .env
FAST_MODE=true
```
- Faster responses (~10-20s per call)
- Shorter prompts
- Still high quality
### Option 2: Full Mode (for production)
```bash
# In .env
FAST_MODE=false
```
- More detailed prompts
- Slightly better quality
- Slower (~30-60s per call)
## Testing
### Quick Test
```bash
# Check health
curl http://localhost:9000/api/health
# Test with sample data (fast mode)
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d @- << EOF
{
"enriched_telemetry": $(cat sample_data/sample_enriched_telemetry.json),
"race_context": $(cat sample_data/sample_race_context.json)
}
EOF
```
## Troubleshooting
### Still getting timeouts?
**1. Check API quota**
- Visit: https://aistudio.google.com/apikey
- Check rate limits and quota
- Free tier: 15 requests/min, 1M tokens/min
**2. Try different model**
```bash
# In .env, try:
GEMINI_MODEL=gemini-1.5-flash # Fastest
# or
GEMINI_MODEL=gemini-1.5-pro # Better quality, slower
```
**3. Increase timeouts further**
```bash
# In .env
BRAINSTORM_TIMEOUT=180
ANALYZE_TIMEOUT=240
```
**4. Reduce strategy count**
If still timing out, you can modify the code to generate fewer strategies:
- Edit `prompts/brainstorm_prompt.py`
- Change "Generate 20 strategies" to "Generate 10 strategies"
### Network issues?
**Check connectivity:**
```bash
# Test Google AI endpoint
curl -I https://generativelanguage.googleapis.com
# Check if behind proxy
echo $HTTP_PROXY
echo $HTTPS_PROXY
```
**Use VPN if needed** - Some regions have restricted access to Google AI APIs
### Monitor performance
**Watch logs:**
```bash
# Start server with logs
python main.py 2>&1 | tee ai_layer.log
# In another terminal, watch for timeouts
tail -f ai_layer.log | grep -i timeout
```
## Performance Benchmarks
### Fast Mode (FAST_MODE=true)
- Brainstorm: ~15-25s
- Analyze: ~20-35s
- Total workflow: ~40-60s
### Full Mode (FAST_MODE=false)
- Brainstorm: ~30-50s
- Analyze: ~40-70s
- Total workflow: ~70-120s
## What Changed
### Before
```
Prompt: 4877 chars
Timeout: 30s
Result: ❌ 504 timeout errors
```
### After (Fast Mode)
```
Prompt: ~1200 chars (75% reduction)
Timeout: 90s
Result: ✅ Works reliably
```
## Configuration Summary
Your current setup:
```bash
GEMINI_MODEL=gemini-2.5-flash # Fast model
FAST_MODE=true # Optimized prompts
BRAINSTORM_TIMEOUT=90 # 3x increase
ANALYZE_TIMEOUT=120 # 2x increase
```
This should work reliably now! 🎉
## Additional Tips
1. **For demos**: Keep FAST_MODE=true
2. **For production**: Test with FAST_MODE=false, adjust timeouts as needed
3. **Monitor quota**: Check usage at https://aistudio.google.com
4. **Cache responses**: Enable DEMO_MODE=true for repeatable demos
---
**Status**: FIXED ✅
**Ready to test**: YES 🚀

# Webhook Push Integration Guide
## Overview
The AI Intelligence Layer supports **two integration models** for receiving enriched telemetry:
1. **Push Model (Webhook)** - Enrichment service POSTs data to AI layer ✅ **RECOMMENDED**
2. **Pull Model** - AI layer fetches data from enrichment service (fallback)
## Push Model (Webhook) - How It Works
```
┌─────────────────────┐ ┌─────────────────────┐
│ HPC Enrichment │ POST │ AI Intelligence │
│ Service │────────▶│ Layer │
│ (Port 8000) │ │ (Port 9000) │
└─────────────────────┘ └─────────────────────┘
┌──────────────┐
│ Telemetry │
│ Buffer │
│ (in-memory) │
└──────────────┘
┌──────────────┐
│ Brainstorm │
│ & Analyze │
│ (Gemini AI) │
└──────────────┘
```
### Configuration
In your **enrichment service** (port 8000), set the callback URL:
```bash
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
When enrichment is complete for each lap, the service will POST to this endpoint.
### Webhook Endpoint
**Endpoint:** `POST /api/ingest/enriched`
**Request Body:** Single enriched telemetry record (JSON)
```json
{
"lap": 27,
"lap_time_seconds": 78.456,
"tire_degradation_index": 0.72,
"fuel_remaining_kg": 45.2,
"aero_efficiency": 0.85,
"ers_recovery_rate": 0.78,
"brake_wear_index": 0.65,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"predicted_tire_cliff_lap": 35,
"weather_impact": "minimal",
"hpc_simulation_id": "sim_monaco_lap27_001",
"metadata": {
"simulation_timestamp": "2025-10-18T22:15:30Z",
"confidence_level": 0.92,
"cluster_nodes_used": 8
}
}
```
**Response:**
```json
{
"status": "received",
"lap": 27,
"buffer_size": 15
}
```
### Buffer Behavior
- **Max Size:** 100 records (configurable)
- **Storage:** In-memory (cleared on restart)
- **Retrieval:** Newest records returned first
- **Auto-cleanup:** Oldest records dropped when buffer is full
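A minimal sketch of these buffer semantics, using a bounded `deque`; the real buffer utility in the repo may differ in details:

```python
from collections import deque

class TelemetryBuffer:
    """Bounded in-memory buffer: oldest records evicted, newest returned first."""

    def __init__(self, max_size: int = 100):
        self._records = deque(maxlen=max_size)  # auto-drops oldest when full

    def add(self, record: dict) -> int:
        self._records.append(record)
        return len(self._records)  # current buffer size, as in the webhook response

    def get_latest(self, limit: int = 10) -> list:
        # Newest records first
        return list(self._records)[-limit:][::-1]

    def clear(self) -> None:
        # Cleared on restart / new session
        self._records.clear()
```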
## Testing the Webhook
### 1. Start the AI Intelligence Layer
```bash
cd ai_intelligence_layer
source myenv/bin/activate # or your venv
python main.py
```
Verify it's running:
```bash
curl http://localhost:9000/api/health
```
### 2. Simulate Enrichment Service Pushing Data
**Option A: Using the test script**
```bash
# Post single telemetry record
python3 test_webhook_push.py
# Post 10 records with 2s delay between each
python3 test_webhook_push.py --loop 10 --delay 2
# Post 5 records with 1s delay
python3 test_webhook_push.py --loop 5 --delay 1
```
**Option B: Using curl**
```bash
curl -X POST http://localhost:9000/api/ingest/enriched \
-H "Content-Type: application/json" \
-d '{
"lap": 27,
"lap_time_seconds": 78.456,
"tire_degradation_index": 0.72,
"fuel_remaining_kg": 45.2,
"aero_efficiency": 0.85,
"ers_recovery_rate": 0.78,
"brake_wear_index": 0.65,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"predicted_tire_cliff_lap": 35,
"weather_impact": "minimal",
"hpc_simulation_id": "sim_monaco_lap27_001",
"metadata": {
"simulation_timestamp": "2025-10-18T22:15:30Z",
"confidence_level": 0.92,
"cluster_nodes_used": 8
}
}'
```
### 3. Verify Buffer Contains Data
Check the logs - you should see:
```
INFO - Received enriched telemetry webhook: lap 27
INFO - Added telemetry for lap 27 (buffer size: 1)
```
### 4. Test Strategy Generation Using Buffered Data
**Brainstorm endpoint** (no telemetry in request = uses buffer):
```bash
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": []
}
}' | python3 -m json.tool
```
Check logs for:
```
INFO - Using 10 telemetry records from webhook buffer
```
## Pull Model (Fallback)
If the buffer is empty and no telemetry is provided in the request, the AI layer will **automatically fetch** from the enrichment service:
```bash
GET http://localhost:8000/enriched?limit=10
```
This ensures the system works even without webhook configuration.
## Priority Order
When brainstorm/analyze endpoints are called:
1. **Check request body** - Use `enriched_telemetry` if provided
2. **Check buffer** - Use webhook buffer if it has data
3. **Fetch from service** - Pull from enrichment service as fallback
4. **Error** - If all fail, return 400 error
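The four-step fallback can be sketched as a plain function. `resolve_telemetry` and the injected `fetch_from_service` callable are illustrative, not the actual endpoint code:

```python
def resolve_telemetry(request_telemetry, buffer_records, fetch_from_service):
    """Resolve telemetry source: request body -> webhook buffer -> pull -> error."""
    if request_telemetry:              # 1. provided directly in the request
        return request_telemetry, "request"
    if buffer_records:                 # 2. webhook buffer has data
        return buffer_records, "buffer"
    fetched = fetch_from_service()     # 3. fallback: GET /enriched from enrichment service
    if fetched:
        return fetched, "fetched"
    # 4. all sources failed -- the endpoint would return HTTP 400
    raise ValueError("No enriched telemetry available")
```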
## Production Recommendations
### For Enrichment Service
```bash
# Configure callback URL
export NEXT_STAGE_CALLBACK_URL=http://ai-layer:9000/api/ingest/enriched
# Add retry logic (recommended)
export CALLBACK_MAX_RETRIES=3
export CALLBACK_TIMEOUT=10
```
### For AI Layer
```python
# config.py - Increase buffer size for production
telemetry_buffer_max_size: int = 500 # Store more history
# Consider Redis for persistent buffer
# (current implementation is in-memory only)
```
### Health Monitoring
```bash
# Check buffer status
curl http://localhost:9000/api/health
# Response includes buffer info (could be added):
{
"status": "healthy",
"buffer_size": 25,
"buffer_max_size": 100
}
```
## Common Issues
### 1. Webhook Not Receiving Data
**Symptoms:** Buffer size stays at 0
**Solutions:**
- Verify enrichment service has `NEXT_STAGE_CALLBACK_URL` configured
- Check network connectivity between services
- Examine enrichment service logs for POST errors
- Confirm AI layer is running on port 9000
### 2. Old Data in Buffer
**Symptoms:** AI uses outdated telemetry
**Solutions:**
- Buffer is FIFO - automatically clears old data
- Restart AI service to clear buffer
- Increase buffer size if race generates data faster than consumption
### 3. Pull Model Used Instead of Push
**Symptoms:** Logs show "fetching from enrichment service" instead of "using buffer"
**Solutions:**
- Confirm webhook is posting data (check buffer size in logs)
- Verify webhook POST is successful (200 response)
- Check if buffer was cleared (restart)
## Integration Examples
### Python (Enrichment Service)
```python
import httpx
async def push_enriched_telemetry(telemetry_data: dict):
"""Push enriched telemetry to AI layer."""
url = "http://localhost:9000/api/ingest/enriched"
async with httpx.AsyncClient() as client:
response = await client.post(url, json=telemetry_data, timeout=10.0)
response.raise_for_status()
return response.json()
```
### Shell Script (Testing)
```bash
#!/bin/bash
# push_telemetry.sh
for lap in {1..10}; do
curl -X POST http://localhost:9000/api/ingest/enriched \
-H "Content-Type: application/json" \
-d "{\"lap\": $lap, \"tire_degradation_index\": 0.7, ...}"
sleep 2
done
```
## Benefits of Push Model
✅ **Real-time** - AI layer receives data immediately as enrichment completes
✅ **Efficient** - No polling, reduces load on enrichment service
✅ **Decoupled** - Services don't need to coordinate timing
✅ **Resilient** - Buffer allows AI to process multiple requests from same dataset
✅ **Simple** - Enrichment service just POSTs and forgets
---
**Next Steps:**
1. Configure `NEXT_STAGE_CALLBACK_URL` in enrichment service
2. Test webhook with `test_webhook_push.py`
3. Monitor logs to confirm push model is working
4. Run brainstorm/analyze and verify buffer usage

# ✅ Webhook Push Integration - WORKING!
## Summary
Your AI Intelligence Layer now **supports webhook push integration** where the enrichment service POSTs telemetry data directly to the AI layer.
## What Was Changed
### 1. Enhanced Telemetry Priority (main.py)
Both `/api/strategy/brainstorm` and `/api/strategy/analyze` now check sources in this order:
1. **Request body** - If telemetry provided in request
2. **Webhook buffer** - If webhook has pushed data ✨ **NEW**
3. **Pull from service** - Fallback to GET http://localhost:8000/enriched
4. **Error** - If all sources fail
### 2. Test Scripts Created
- `test_webhook_push.py` - Simulates enrichment service POSTing telemetry
- `test_buffer_usage.py` - Verifies brainstorm uses buffered data
- `check_enriched.py` - Checks enrichment service for live data
### 3. Documentation
- `WEBHOOK_INTEGRATION.md` - Complete integration guide
## How It Works
```
Enrichment Service AI Intelligence Layer
(Port 8000) (Port 9000)
│ │
│ POST telemetry │
│──────────────────────────▶│
│ /api/ingest/enriched │
│ │
│ ✓ {status: "received"} │
│◀──────────────────────────│
│ │
┌──────────────┐
│ Buffer │
│ (5 records) │
└──────────────┘
User calls │
brainstorm │
(no telemetry) │
Uses buffer data!
```
## Quick Test (Just Completed! ✅)
### Step 1: Push telemetry via webhook
```bash
python3 test_webhook_push.py --loop 5 --delay 1
```
**Result:**
```
✓ Posted lap 27 - Buffer size: 1 records
✓ Posted lap 28 - Buffer size: 2 records
✓ Posted lap 29 - Buffer size: 3 records
✓ Posted lap 30 - Buffer size: 4 records
✓ Posted lap 31 - Buffer size: 5 records
Posted 5/5 records successfully
✓ Telemetry is now in the AI layer's buffer
```
### Step 2: Call brainstorm (will use buffer automatically)
```bash
python3 test_buffer_usage.py
```
This calls `/api/strategy/brainstorm` **without** providing telemetry in the request.
**Expected logs in AI service:**
```
INFO - Using 5 telemetry records from webhook buffer
INFO - Generated 20 strategies
```
## Configure Your Enrichment Service
In your enrichment service (port 8000), set the callback URL:
```bash
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
Then in your enrichment code:
```python
import httpx
async def send_enriched_telemetry(telemetry: dict):
"""Push enriched telemetry to AI layer."""
async with httpx.AsyncClient() as client:
response = await client.post(
"http://localhost:9000/api/ingest/enriched",
json=telemetry,
timeout=10.0
)
response.raise_for_status()
return response.json()
# After HPC enrichment completes for a lap:
await send_enriched_telemetry({
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
})
```
## Telemetry Model (Required Fields)
Your enrichment service must POST data matching this exact schema:
```json
{
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
}
```
**Field constraints:**
- All numeric fields: 0.0 to 1.0 (float)
- `weather_impact`: Must be "low", "medium", or "high" (string literal)
- `lap`: Integer > 0
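The service enforces these constraints via its Pydantic input models; this plain-Python sketch merely restates them for reference and is not the actual model code:

```python
NUMERIC_FIELDS = (
    "aero_efficiency", "tire_degradation_index", "ers_charge",
    "fuel_optimization_score", "driver_consistency",
)

def validate_telemetry(payload: dict) -> list:
    """Return a list of constraint violations (empty list means valid)."""
    errors = []
    lap = payload.get("lap")
    if not isinstance(lap, int) or lap <= 0:
        errors.append("lap must be an integer > 0")
    for field in NUMERIC_FIELDS:
        value = payload.get(field)
        if not isinstance(value, (int, float)) or not 0.0 <= value <= 1.0:
            errors.append(f"{field} must be a float in [0.0, 1.0]")
    if payload.get("weather_impact") not in ("low", "medium", "high"):
        errors.append('weather_impact must be "low", "medium", or "high"')
    return errors
```

A payload that fails these checks is what produces the `422 Unprocessable Content` responses mentioned in the troubleshooting sections.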
## Benefits of Webhook Push Model
✅ **Real-time** - AI receives data immediately as enrichment completes
✅ **Efficient** - No polling overhead
✅ **Decoupled** - Services operate independently
✅ **Resilient** - Buffer allows multiple strategy requests from same dataset
✅ **Automatic** - Brainstorm/analyze use buffer when no telemetry provided
## Verification Commands
### 1. Check webhook endpoint is working
```bash
curl -X POST http://localhost:9000/api/ingest/enriched \
-H "Content-Type: application/json" \
-d '{
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
}'
```
Expected response:
```json
{"status": "received", "lap": 27, "buffer_size": 1}
```
### 2. Check logs for buffer usage
When you call brainstorm/analyze, look for:
```
INFO - Using N telemetry records from webhook buffer
```
If buffer is empty:
```
INFO - No telemetry in buffer, fetching from enrichment service...
```
## Next Steps
1. ✅ **Webhook tested** - Successfully pushed 5 records
2. ⏭️ **Configure enrichment service** - Add NEXT_STAGE_CALLBACK_URL
3. ⏭️ **Test end-to-end** - Run enrichment → webhook → brainstorm
4. ⏭️ **Monitor logs** - Verify buffer usage in production
---
**Files created:**
- `test_webhook_push.py` - Webhook testing tool
- `test_buffer_usage.py` - Buffer verification tool
- `WEBHOOK_INTEGRATION.md` - Complete integration guide
- This summary
**Code modified:**
- `main.py` - Enhanced brainstorm/analyze to prioritize webhook buffer
- Both endpoints now check: request → buffer → fetch → error
**Status:** ✅ Webhook push model fully implemented and tested!

@@ -2,12 +2,15 @@
AI Intelligence Layer - FastAPI Application
Port: 9000
Provides F1 race strategy generation and analysis using Gemini AI.
Supports WebSocket connections from Pi for bidirectional control.
"""
from fastapi import FastAPI, HTTPException, status, WebSocket, WebSocketDisconnect
from fastapi.middleware.cors import CORSMiddleware
from contextlib import asynccontextmanager
import logging
import asyncio
import random
from typing import Dict, Any, List
from config import get_settings
from models.input_models import (
@@ -41,6 +44,37 @@ strategy_generator: StrategyGenerator = None
# strategy_analyzer: StrategyAnalyzer = None  # Disabled - not using analysis
telemetry_client: TelemetryClient = None
current_race_context: RaceContext = None  # Store race context globally
last_control_command: Dict[str, int] = {"brake_bias": 5, "differential_slip": 5} # Store last command
# WebSocket connection manager
class ConnectionManager:
"""Manages WebSocket connections from Pi clients."""
def __init__(self):
self.active_connections: List[WebSocket] = []
async def connect(self, websocket: WebSocket):
await websocket.accept()
self.active_connections.append(websocket)
logger.info(f"Pi client connected. Total connections: {len(self.active_connections)}")
def disconnect(self, websocket: WebSocket):
self.active_connections.remove(websocket)
logger.info(f"Pi client disconnected. Total connections: {len(self.active_connections)}")
async def send_control_command(self, websocket: WebSocket, command: Dict[str, Any]):
"""Send control command to specific Pi client."""
await websocket.send_json(command)
async def broadcast_control_command(self, command: Dict[str, Any]):
"""Broadcast control command to all connected Pi clients."""
for connection in self.active_connections:
try:
await connection.send_json(command)
except Exception as e:
logger.error(f"Error broadcasting to client: {e}")
websocket_manager = ConnectionManager()
@asynccontextmanager @asynccontextmanager
@@ -263,6 +297,248 @@ async def analyze_strategies(request: AnalyzeRequest):
""" """
@app.websocket("/ws/pi")
async def websocket_pi_endpoint(websocket: WebSocket):
"""
WebSocket endpoint for Raspberry Pi clients.
Flow:
1. Pi connects and streams lap telemetry via WebSocket
2. AI layer processes telemetry and generates strategies
3. AI layer pushes control commands back to Pi (brake_bias, differential_slip)
"""
global current_race_context, last_control_command
await websocket_manager.connect(websocket)
# Clear telemetry buffer for fresh connection
# This ensures lap counting starts from scratch for each Pi session
telemetry_buffer.clear()
# Reset last control command to neutral for new session
last_control_command = {"brake_bias": 5, "differential_slip": 5}
logger.info("[WebSocket] Telemetry buffer cleared for new connection")
try:
        # Send initial welcome message
        await websocket.send_json({
            "type": "connection_established",
            "message": "Connected to AI Intelligence Layer",
            "status": "ready",
            "buffer_cleared": True
        })
        # Main message loop
        while True:
            # Receive telemetry from Pi
            data = await websocket.receive_json()
            message_type = data.get("type", "telemetry")
            if message_type == "telemetry":
                # Process incoming lap telemetry
                lap_number = data.get("lap_number", 0)
                # Store in buffer (convert to EnrichedTelemetryWebhook format)
                # Note: This assumes data is already enriched. If raw, route through enrichment first.
                enriched = data.get("enriched_telemetry")
                race_context_data = data.get("race_context")
                if enriched and race_context_data:
                    try:
                        # Parse enriched telemetry
                        enriched_obj = EnrichedTelemetryWebhook(**enriched)
                        telemetry_buffer.add(enriched_obj)
                        # Update race context
                        current_race_context = RaceContext(**race_context_data)
                        # Auto-generate strategies if we have enough data
                        buffer_data = telemetry_buffer.get_latest(limit=10)
                        if len(buffer_data) >= 3:
                            logger.info(f"\n{'='*60}")
                            logger.info(f"LAP {lap_number} - GENERATING STRATEGY")
                            logger.info(f"{'='*60}")
                            # Send immediate acknowledgment while processing
                            # Use last known control values instead of resetting to neutral
                            await websocket.send_json({
                                "type": "control_command",
                                "lap": lap_number,
                                "brake_bias": last_control_command["brake_bias"],
                                "differential_slip": last_control_command["differential_slip"],
                                "message": "Processing strategies (maintaining previous settings)..."
                            })
                            # Generate strategies (this is the slow part)
                            try:
                                response = await strategy_generator.generate(
                                    enriched_telemetry=buffer_data,
                                    race_context=current_race_context
                                )
                                # Extract top strategy (first one)
                                top_strategy = response.strategies[0] if response.strategies else None
                                # Generate control commands based on strategy
                                control_command = generate_control_command(
                                    lap_number=lap_number,
                                    strategy=top_strategy,
                                    enriched_telemetry=enriched_obj,
                                    race_context=current_race_context
                                )
                                # Update global last command
                                last_control_command = {
                                    "brake_bias": control_command["brake_bias"],
                                    "differential_slip": control_command["differential_slip"]
                                }
                                # Send updated control command with strategies
                                await websocket.send_json({
                                    "type": "control_command_update",
                                    "lap": lap_number,
                                    "brake_bias": control_command["brake_bias"],
                                    "differential_slip": control_command["differential_slip"],
                                    "strategy_name": top_strategy.strategy_name if top_strategy else "N/A",
                                    "total_strategies": len(response.strategies),
                                    "reasoning": control_command.get("reasoning", "")
                                })
                                logger.info(f"{'='*60}\n")
                            except Exception as e:
                                logger.error(f"[WebSocket] Strategy generation failed: {e}")
                                # Send error only; the Pi keeps the last control values (no new command is sent)
                                await websocket.send_json({
                                    "type": "error",
                                    "lap": lap_number,
                                    "message": f"Strategy generation failed: {str(e)}"
                                })
                        else:
                            # Not enough data yet, send neutral command
                            await websocket.send_json({
                                "type": "control_command",
                                "lap": lap_number,
                                "brake_bias": 5,  # Neutral
                                "differential_slip": 5,  # Neutral
                                "message": f"Collecting data ({len(buffer_data)}/3 laps)"
                            })
                    except Exception as e:
                        logger.error(f"[WebSocket] Error processing telemetry: {e}")
                        await websocket.send_json({
                            "type": "error",
                            "message": str(e)
                        })
                else:
                    logger.warning("[WebSocket] Received incomplete data from Pi")
            elif message_type == "ping":
                # Respond to ping
                await websocket.send_json({"type": "pong"})
            elif message_type == "disconnect":
                # Graceful disconnect
                logger.info("[WebSocket] Pi requested disconnect")
                break
    except WebSocketDisconnect:
        logger.info("[WebSocket] Pi client disconnected")
    except Exception as e:
        logger.error(f"[WebSocket] Unexpected error: {e}")
    finally:
        websocket_manager.disconnect(websocket)
        # Clear buffer when connection closes to ensure fresh start for next connection
        telemetry_buffer.clear()
        logger.info("[WebSocket] Telemetry buffer cleared on disconnect")
def generate_control_command(
    lap_number: int,
    strategy: Any,
    enriched_telemetry: EnrichedTelemetryWebhook,
    race_context: RaceContext
) -> Dict[str, Any]:
    """
    Generate control commands for Pi based on strategy and telemetry.
    Returns brake_bias and differential_slip values (0-10) with reasoning.
    Logic:
    - Brake bias: Adjust based on tire degradation (higher deg = more rear bias)
    - Differential slip: Adjust based on pace trend and tire cliff risk
    """
    # Default neutral values
    brake_bias = 5
    differential_slip = 5
    reasoning_parts = []
    # Adjust brake bias based on tire degradation
    if enriched_telemetry.tire_degradation_rate > 0.7:
        # High degradation: shift bias to rear (protect fronts)
        brake_bias = 7
        reasoning_parts.append(f"High tire degradation ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 7 (rear) to protect fronts")
    elif enriched_telemetry.tire_degradation_rate > 0.4:
        # Moderate degradation: slight rear bias
        brake_bias = 6
        reasoning_parts.append(f"Moderate tire degradation ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 6 (slight rear)")
    elif enriched_telemetry.tire_degradation_rate < 0.2:
        # Fresh tires: can use front bias for better turn-in
        brake_bias = 4
        reasoning_parts.append(f"Fresh tires ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 4 (front) for better turn-in")
    else:
        reasoning_parts.append(f"Normal tire degradation ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 5 (neutral)")
    # Adjust differential slip based on pace and tire cliff risk
    if enriched_telemetry.tire_cliff_risk > 0.7:
        # High cliff risk: increase slip for gentler tire treatment
        differential_slip = 7
        reasoning_parts.append(f"High tire cliff risk ({enriched_telemetry.tire_cliff_risk:.2f}) → Diff slip 7 (gentle tire treatment)")
    elif enriched_telemetry.pace_trend == "declining":
        # Pace declining: moderate slip increase
        differential_slip = 6
        reasoning_parts.append("Pace declining → Diff slip 6 (preserve performance)")
    elif enriched_telemetry.pace_trend == "improving":
        # Pace improving: can be aggressive, lower slip
        differential_slip = 4
        reasoning_parts.append("Pace improving → Diff slip 4 (aggressive, lower slip)")
    else:
        reasoning_parts.append("Pace stable → Diff slip 5 (neutral)")
    # Check if within pit window
    pit_window = enriched_telemetry.optimal_pit_window
    if pit_window and pit_window[0] <= lap_number <= pit_window[1]:
        # In pit window: conservative settings to preserve tires
        old_brake = brake_bias
        old_diff = differential_slip
        brake_bias = min(brake_bias + 1, 10)
        differential_slip = min(differential_slip + 1, 10)
        reasoning_parts.append(f"In pit window (laps {pit_window[0]}-{pit_window[1]}) → Conservative: brake {old_brake}→{brake_bias}, diff {old_diff}→{differential_slip}")
    # Format reasoning for terminal output
    reasoning_text = "\n".join(reasoning_parts)
    # Print reasoning to terminal
    logger.info("CONTROL DECISION REASONING:")
    logger.info(reasoning_text)
    logger.info(f"FINAL COMMANDS: Brake Bias = {brake_bias}, Differential Slip = {differential_slip}")
    # Also include strategy info if available
    if strategy:
        logger.info(f"TOP STRATEGY: {strategy.strategy_name}")
        logger.info(f"  Risk Level: {strategy.risk_level}")
        logger.info(f"  Description: {strategy.brief_description}")
    return {
        "brake_bias": brake_bias,
        "differential_slip": differential_slip,
        "reasoning": reasoning_text
    }
if __name__ == "__main__":
    import uvicorn
    settings = get_settings()
@@ -273,3 +549,4 @@ if __name__ == "__main__":
        reload=True
    )
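The brake-bias/differential mapping above is a plain threshold table. A minimal, self-contained sketch of the same rules — using a hypothetical `TelemetryStub` in place of the real `EnrichedTelemetryWebhook` model — makes the bands easy to eyeball:

```python
from dataclasses import dataclass

@dataclass
class TelemetryStub:
    """Hypothetical stand-in for EnrichedTelemetryWebhook (illustration only)."""
    tire_degradation_rate: float
    tire_cliff_risk: float
    pace_trend: str

def map_controls(t: TelemetryStub):
    """Map lap-level telemetry to (brake_bias, differential_slip) on a 0-10 scale."""
    # Brake bias: shift rearward as the tires degrade
    if t.tire_degradation_rate > 0.7:
        brake_bias = 7
    elif t.tire_degradation_rate > 0.4:
        brake_bias = 6
    elif t.tire_degradation_rate < 0.2:
        brake_bias = 4
    else:
        brake_bias = 5
    # Differential slip: soften when a cliff is likely or pace is falling
    if t.tire_cliff_risk > 0.7:
        differential_slip = 7
    elif t.pace_trend == "declining":
        differential_slip = 6
    elif t.pace_trend == "improving":
        differential_slip = 4
    else:
        differential_slip = 5
    return brake_bias, differential_slip

print(map_controls(TelemetryStub(0.8, 0.2, "stable")))  # (7, 5) high deg → rear bias
print(map_controls(TelemetryStub(0.1, 0.9, "stable")))  # (4, 7) cliff risk → gentle diff
```

Note the ordering: cliff risk is checked before pace trend, so a high-risk lap always wins the differential decision even if pace is improving.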
View File
@@ -7,14 +7,13 @@ from typing import List, Literal, Optional
 class EnrichedTelemetryWebhook(BaseModel):
-    """Single lap of enriched telemetry data from HPC enrichment module."""
+    """Single lap of enriched telemetry data from HPC enrichment module (lap-level)."""
     lap: int = Field(..., description="Lap number")
-    aero_efficiency: float = Field(..., ge=0.0, le=1.0, description="Aerodynamic efficiency (0..1, higher is better)")
-    tire_degradation_index: float = Field(..., ge=0.0, le=1.0, description="Tire wear (0..1, higher is worse)")
-    ers_charge: float = Field(..., ge=0.0, le=1.0, description="Energy recovery system charge level")
-    fuel_optimization_score: float = Field(..., ge=0.0, le=1.0, description="Fuel efficiency score")
-    driver_consistency: float = Field(..., ge=0.0, le=1.0, description="Lap-to-lap consistency")
-    weather_impact: Literal["low", "medium", "high"] = Field(..., description="Weather effect severity")
+    tire_degradation_rate: float = Field(..., ge=0.0, le=1.0, description="Tire degradation rate (0..1, higher is worse)")
+    pace_trend: Literal["improving", "stable", "declining"] = Field(..., description="Pace trend over recent laps")
+    tire_cliff_risk: float = Field(..., ge=0.0, le=1.0, description="Probability of tire performance cliff (0..1)")
+    optimal_pit_window: List[int] = Field(..., description="Recommended pit stop lap window [start, end]")
+    performance_delta: float = Field(..., description="Lap time delta vs baseline (negative = slower)")
 class RaceInfo(BaseModel):
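The new model leans on pydantic's `Field(ge=..., le=...)` bounds so malformed enrichment output is rejected at the webhook boundary. A simplified mirror of the model (illustration only, assuming pydantic v2; not the real class) shows the behavior:

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

class LapTelemetry(BaseModel):
    """Simplified mirror of EnrichedTelemetryWebhook (illustration, not the real model)."""
    lap: int
    tire_degradation_rate: float = Field(..., ge=0.0, le=1.0)
    tire_cliff_risk: float = Field(..., ge=0.0, le=1.0)
    optimal_pit_window: List[int]

# A valid lap-level record passes through unchanged
ok = LapTelemetry(lap=12, tire_degradation_rate=0.45,
                  tire_cliff_risk=0.2, optimal_pit_window=[18, 24])

# An out-of-range rate is rejected at the boundary, not deep in the pipeline
try:
    LapTelemetry(lap=12, tire_degradation_rate=1.3,
                 tire_cliff_risk=0.2, optimal_pit_window=[18, 24])
except ValidationError as e:
    print("rejected:", len(e.errors()), "error(s)")
```

Bounds in the model mean downstream code (the prompt builder, the control mapper) can assume 0..1 ranges without re-validating.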
View File
@@ -11,12 +11,11 @@ def build_brainstorm_prompt_fast(
     enriched_telemetry: List[EnrichedTelemetryWebhook],
     race_context: RaceContext
 ) -> str:
-    """Build a faster, more concise prompt for quicker responses."""
+    """Build a faster, more concise prompt for quicker responses (lap-level data)."""
     settings = get_settings()
     count = settings.strategy_count
     latest = max(enriched_telemetry, key=lambda x: x.lap)
-    tire_rate = TelemetryAnalyzer.calculate_tire_degradation_rate(enriched_telemetry)
-    tire_cliff = TelemetryAnalyzer.project_tire_cliff(enriched_telemetry, race_context.race_info.current_lap)
+    pit_window = latest.optimal_pit_window
     if count == 1:
         # Ultra-fast mode: just generate 1 strategy
@@ -24,7 +23,7 @@ def build_brainstorm_prompt_fast(
 CURRENT: Lap {race_context.race_info.current_lap}/{race_context.race_info.total_laps}, P{race_context.driver_state.current_position}, {race_context.driver_state.current_tire_compound} tires ({race_context.driver_state.tire_age_laps} laps old)
-TELEMETRY: Aero {latest.aero_efficiency:.2f}, Tire deg {latest.tire_degradation_index:.2f} (cliff lap {tire_cliff}), ERS {latest.ers_charge:.2f}
+TELEMETRY: Tire deg rate {latest.tire_degradation_rate:.2f}, Cliff risk {latest.tire_cliff_risk:.2f}, Pace {latest.pace_trend}, Pit window laps {pit_window[0]}-{pit_window[1]}
 Generate 1 optimal strategy. Min 2 tire compounds required.
@@ -36,17 +35,17 @@ JSON: {{"strategies": [{{"strategy_id": 1, "strategy_name": "name", "stop_count"
 CURRENT: Lap {race_context.race_info.current_lap}/{race_context.race_info.total_laps}, P{race_context.driver_state.current_position}, {race_context.driver_state.current_tire_compound} tires ({race_context.driver_state.tire_age_laps} laps old)
-TELEMETRY: Aero {latest.aero_efficiency:.2f}, Tire deg {latest.tire_degradation_index:.2f} (cliff lap {tire_cliff}), ERS {latest.ers_charge:.2f}, Fuel {latest.fuel_optimization_score:.2f}
+TELEMETRY: Tire deg {latest.tire_degradation_rate:.2f}, Cliff risk {latest.tire_cliff_risk:.2f}, Pace {latest.pace_trend}, Delta {latest.performance_delta:+.2f}s, Pit window {pit_window[0]}-{pit_window[1]}
 Generate {count} strategies: conservative (1-stop), standard (1-2 stop), aggressive (undercut). Min 2 tire compounds each.
-JSON: {{"strategies": [{{"strategy_id": 1, "strategy_name": "Conservative Stay Out", "stop_count": 1, "pit_laps": [35], "tire_sequence": ["medium", "hard"], "brief_description": "extend current stint then hard tires to end", "risk_level": "low", "key_assumption": "tire cliff at lap {tire_cliff}"}}]}}"""
+JSON: {{"strategies": [{{"strategy_id": 1, "strategy_name": "Conservative Stay Out", "stop_count": 1, "pit_laps": [35], "tire_sequence": ["medium", "hard"], "brief_description": "extend current stint then hard tires to end", "risk_level": "low", "key_assumption": "tire cliff risk stays below 0.7"}}]}}"""
     return f"""Generate {count} F1 race strategies for {race_context.driver_state.driver_name} at {race_context.race_info.track_name}.
 CURRENT: Lap {race_context.race_info.current_lap}/{race_context.race_info.total_laps}, P{race_context.driver_state.current_position}, {race_context.driver_state.current_tire_compound} tires ({race_context.driver_state.tire_age_laps} laps old)
-TELEMETRY: Aero {latest.aero_efficiency:.2f}, Tire deg {latest.tire_degradation_index:.2f} (rate {tire_rate:.3f}/lap, cliff lap {tire_cliff}), ERS {latest.ers_charge:.2f}, Fuel {latest.fuel_optimization_score:.2f}, Consistency {latest.driver_consistency:.2f}
+TELEMETRY: Tire deg rate {latest.tire_degradation_rate:.2f}, Cliff risk {latest.tire_cliff_risk:.2f}, Pace {latest.pace_trend}, Performance delta {latest.performance_delta:+.2f}s, Pit window laps {pit_window[0]}-{pit_window[1]}
 Generate {count} diverse strategies. Min 2 compounds.
@@ -67,27 +66,19 @@ def build_brainstorm_prompt(
     Returns:
         Formatted prompt string
     """
-    # Generate telemetry summary
-    telemetry_summary = TelemetryAnalyzer.generate_telemetry_summary(enriched_telemetry)
-    # Calculate key metrics
-    tire_rate = TelemetryAnalyzer.calculate_tire_degradation_rate(enriched_telemetry)
-    tire_cliff_lap = TelemetryAnalyzer.project_tire_cliff(
-        enriched_telemetry,
-        race_context.race_info.current_lap
-    )
-    # Format telemetry data
+    # Get latest telemetry
+    latest = max(enriched_telemetry, key=lambda x: x.lap)
+    # Format telemetry data (lap-level)
     telemetry_data = []
     for t in sorted(enriched_telemetry, key=lambda x: x.lap, reverse=True)[:10]:
         telemetry_data.append({
             "lap": t.lap,
-            "aero_efficiency": round(t.aero_efficiency, 3),
-            "tire_degradation_index": round(t.tire_degradation_index, 3),
-            "ers_charge": round(t.ers_charge, 3),
-            "fuel_optimization_score": round(t.fuel_optimization_score, 3),
-            "driver_consistency": round(t.driver_consistency, 3),
-            "weather_impact": t.weather_impact
+            "tire_degradation_rate": round(t.tire_degradation_rate, 3),
+            "pace_trend": t.pace_trend,
+            "tire_cliff_risk": round(t.tire_cliff_risk, 3),
+            "optimal_pit_window": t.optimal_pit_window,
+            "performance_delta": round(t.performance_delta, 2)
         })
     # Format competitors
@@ -101,15 +92,14 @@ def build_brainstorm_prompt(
             "gap_seconds": round(c.gap_seconds, 1)
         })
-    prompt = f"""You are an expert F1 strategist. Generate 20 diverse race strategies.
+    prompt = f"""You are an expert F1 strategist. Generate 20 diverse race strategies based on lap-level telemetry.
-TELEMETRY METRICS:
-- aero_efficiency: <0.6 problem, >0.8 optimal
-- tire_degradation_index: >0.7 degrading, >0.85 cliff
-- ers_charge: >0.7 attack, <0.3 depleted
-- fuel_optimization_score: <0.7 save fuel
-- driver_consistency: <0.75 risky
-- weather_impact: severity level
+LAP-LEVEL TELEMETRY METRICS:
+- tire_degradation_rate: 0-1 (higher = worse tire wear)
+- tire_cliff_risk: 0-1 (probability of hitting tire cliff)
+- pace_trend: "improving", "stable", or "declining"
+- optimal_pit_window: [start_lap, end_lap] recommended pit range
+- performance_delta: seconds vs baseline (negative = slower)
 RACE STATE:
 Track: {race_context.race_info.track_name}
@@ -129,12 +119,11 @@ COMPETITORS:
 ENRICHED TELEMETRY (Last {len(telemetry_data)} laps, newest first):
 {telemetry_data}
-TELEMETRY ANALYSIS:
-{telemetry_summary}
 KEY INSIGHTS:
-- Tire degradation rate: {tire_rate:.3f} per lap
-- Projected tire cliff: Lap {tire_cliff_lap}
+- Latest tire degradation rate: {latest.tire_degradation_rate:.3f}
+- Latest tire cliff risk: {latest.tire_cliff_risk:.3f}
+- Latest pace trend: {latest.pace_trend}
+- Optimal pit window: Laps {latest.optimal_pit_window[0]}-{latest.optimal_pit_window[1]}
 - Laps remaining: {race_context.race_info.total_laps - race_context.race_info.current_lap}
 TASK: Generate exactly 20 diverse strategies.
@@ -144,7 +133,7 @@ DIVERSITY: Conservative (1-stop), Standard (balanced), Aggressive (undercut), Re
 RULES:
 - Pit laps: {race_context.race_info.current_lap + 1} to {race_context.race_info.total_laps - 1}
 - Min 2 tire compounds (F1 rule)
-- Time pits before tire cliff (projected lap {tire_cliff_lap})
+- Consider optimal pit window and tire cliff risk
 For each strategy provide:
 - strategy_id: 1-20
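The rewritten builder no longer derives metrics itself; it just serializes the last 10 laps, newest first, with rounded values. A dict-based sketch of that formatting step (field set abbreviated for illustration):

```python
def format_lap_rows(laps, limit=10):
    """Newest-first lap rows with rounded metrics, as the prompt builder emits them."""
    rows = []
    # Sort descending by lap so the model sees the freshest data first, cap at `limit`
    for t in sorted(laps, key=lambda x: x["lap"], reverse=True)[:limit]:
        rows.append({
            "lap": t["lap"],
            "tire_degradation_rate": round(t["tire_degradation_rate"], 3),
            "pace_trend": t["pace_trend"],
        })
    return rows

laps = [{"lap": i, "tire_degradation_rate": 0.02 * i, "pace_trend": "stable"}
        for i in range(1, 15)]
rows = format_lap_rows(laps)
print(rows[0]["lap"], rows[-1]["lap"], len(rows))  # 14 5 10
```

Rounding before serialization keeps the prompt short and avoids float noise like `0.28000000000000003` leaking into the model input.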
View File
@@ -40,17 +40,14 @@ class StrategyGenerator:
         Raises:
             Exception: If generation fails
         """
-        logger.info("Starting strategy brainstorming...")
-        logger.info(f"Using {len(enriched_telemetry)} telemetry records")
+        logger.info(f"Generating strategies using {len(enriched_telemetry)} laps of telemetry")
         # Build prompt (use fast mode if enabled)
         if self.settings.fast_mode:
             from prompts.brainstorm_prompt import build_brainstorm_prompt_fast
             prompt = build_brainstorm_prompt_fast(enriched_telemetry, race_context)
-            logger.info("Using FAST MODE prompt")
         else:
             prompt = build_brainstorm_prompt(enriched_telemetry, race_context)
-        logger.debug(f"Prompt length: {len(prompt)} chars")
         # Generate with Gemini (high temperature for creativity)
         response_data = await self.gemini_client.generate_json(
@@ -64,7 +61,6 @@ class StrategyGenerator:
             raise Exception("Response missing 'strategies' field")
         strategies_data = response_data["strategies"]
-        logger.info(f"Received {len(strategies_data)} strategies from Gemini")
         # Validate and parse strategies
         strategies = []
@@ -73,15 +69,12 @@ class StrategyGenerator:
                 strategy = Strategy(**s_data)
                 strategies.append(strategy)
             except Exception as e:
-                logger.warning(f"Failed to parse strategy {s_data.get('strategy_id', '?')}: {e}")
-        logger.info(f"Successfully parsed {len(strategies)} strategies")
+                logger.warning(f"Failed to parse strategy: {e}")
+        logger.info(f"Generated {len(strategies)} valid strategies")
         # Validate strategies
         valid_strategies = StrategyValidator.validate_strategies(strategies, race_context)
-        if len(valid_strategies) < 10:
-            logger.warning(f"Only {len(valid_strategies)} valid strategies (expected 20)")
         # Return response
         return BrainstormResponse(strategies=valid_strategies)
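The generator parses each strategy dict individually so one malformed LLM entry does not discard the whole batch. A self-contained sketch of that lenient loop, with a hypothetical `StrategySketch` dataclass standing in for the real pydantic `Strategy` model:

```python
from dataclasses import dataclass

@dataclass
class StrategySketch:
    """Minimal stand-in for the real pydantic Strategy model (illustration only)."""
    strategy_id: int
    strategy_name: str
    pit_laps: list

def parse_strategies(raw):
    """Keep every entry that parses cleanly; skip malformed ones instead of failing."""
    parsed = []
    for s_data in raw:
        try:
            parsed.append(StrategySketch(**s_data))
        except TypeError:
            # Unexpected or missing keys: drop this entry, keep the rest
            continue
    return parsed

raw = [
    {"strategy_id": 1, "strategy_name": "One stop", "pit_laps": [30]},
    {"strategy_id": 2, "bad_field": True},  # malformed entry is skipped
]
print(len(parse_strategies(raw)))  # 1
```

Per-item error handling matters with LLM JSON output: one hallucinated field should cost you one strategy, not the entire response.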
View File
@@ -0,0 +1,45 @@
#!/bin/bash
# Start AI Intelligence Layer
# Usage: ./start_ai_layer.sh
cd "$(dirname "$0")"
echo "Starting AI Intelligence Layer on port 9000..."
echo "Logs will be written to /tmp/ai_layer.log"
echo ""
# Kill any existing process on port 9000
PID=$(lsof -ti:9000)
if [ ! -z "$PID" ]; then
echo "Killing existing process on port 9000 (PID: $PID)"
kill -9 $PID 2>/dev/null
sleep 1
fi
# Start the AI layer
python3 main.py > /tmp/ai_layer.log 2>&1 &
AI_PID=$!
echo "AI Layer started with PID: $AI_PID"
echo ""
# Wait for startup
echo "Waiting for server to start..."
sleep 3
# Check if it's running
if lsof -Pi :9000 -sTCP:LISTEN -t >/dev/null ; then
echo "✓ AI Intelligence Layer is running on port 9000"
echo ""
echo "Health check:"
curl -s http://localhost:9000/api/health | python3 -m json.tool 2>/dev/null || echo " (waiting for full startup...)"
echo ""
echo "WebSocket endpoint: ws://localhost:9000/ws/pi"
echo ""
echo "To stop: kill $AI_PID"
echo "To view logs: tail -f /tmp/ai_layer.log"
else
echo "✗ Failed to start AI Intelligence Layer"
echo "Check logs: tail /tmp/ai_layer.log"
exit 1
fi
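The script advertises the `ws://localhost:9000/ws/pi` endpoint; the handler shown earlier exchanges JSON frames keyed by `type`. A sketch of the message shape a Pi client would send and the handler's dispatch (field values are hypothetical; the real `race_context` follows the full `RaceContext` model):

```python
# Hypothetical lap message matching what the /ws/pi handler expects:
# a "telemetry" frame carrying enriched_telemetry plus race_context.
message = {
    "type": "telemetry",
    "lap_number": 12,
    "enriched_telemetry": {
        "lap": 12,
        "tire_degradation_rate": 0.55,
        "pace_trend": "declining",
        "tire_cliff_risk": 0.35,
        "optimal_pit_window": [16, 22],
        "performance_delta": -0.42,
    },
    "race_context": {"race_info": {}},  # abbreviated; full RaceContext in practice
}

def classify(msg):
    """Mirror of the handler's dispatch on msg['type'] (defaults to telemetry)."""
    t = msg.get("type", "telemetry")
    if t == "telemetry":
        # Both halves must be present, else the handler logs "incomplete data"
        complete = bool(msg.get("enriched_telemetry")) and bool(msg.get("race_context"))
        return "process" if complete else "incomplete"
    if t == "ping":
        return "pong"
    if t == "disconnect":
        return "close"
    return "ignore"

print(classify(message))            # process
print(classify({"type": "ping"}))   # pong
```

Responses flow back as `control_command` (immediate acknowledgment with the last settings), then `control_command_update` once strategy generation finishes, or `error` on failure.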
View File
@@ -80,133 +80,25 @@ class StrategyValidator:
 class TelemetryAnalyzer:
-    """Analyzes enriched telemetry data to extract trends and insights."""
+    """Analyzes enriched lap-level telemetry data to extract trends and insights."""
     @staticmethod
     def calculate_tire_degradation_rate(telemetry: List[EnrichedTelemetryWebhook]) -> float:
         """
-        Calculate tire degradation rate per lap.
+        Calculate tire degradation rate per lap (using lap-level data).
         Args:
             telemetry: List of enriched telemetry records
         Returns:
-            Rate of tire degradation per lap (0.0 to 1.0)
+            Latest tire degradation rate (0.0 to 1.0)
         """
-        if len(telemetry) < 2:
-            return 0.0
-        # Sort by lap (ascending)
-        sorted_telemetry = sorted(telemetry, key=lambda x: x.lap)
-        # Calculate rate of change
-        first = sorted_telemetry[0]
-        last = sorted_telemetry[-1]
-        lap_diff = last.lap - first.lap
-        if lap_diff == 0:
-            return 0.0
-        deg_diff = last.tire_degradation_index - first.tire_degradation_index
-        rate = deg_diff / lap_diff
-        return max(0.0, rate)  # Ensure non-negative
-    @staticmethod
-    def calculate_aero_efficiency_avg(telemetry: List[EnrichedTelemetryWebhook]) -> float:
-        """
-        Calculate average aero efficiency.
-        Args:
-            telemetry: List of enriched telemetry records
-        Returns:
-            Average aero efficiency (0.0 to 1.0)
-        """
         if not telemetry:
             return 0.0
-        total = sum(t.aero_efficiency for t in telemetry)
-        return total / len(telemetry)
-    @staticmethod
-    def analyze_ers_pattern(telemetry: List[EnrichedTelemetryWebhook]) -> str:
-        """
-        Analyze ERS charge pattern.
-        Args:
-            telemetry: List of enriched telemetry records
-        Returns:
-            Pattern description: "charging", "stable", "depleting"
-        """
-        if len(telemetry) < 2:
-            return "stable"
-        # Sort by lap
-        sorted_telemetry = sorted(telemetry, key=lambda x: x.lap)
-        # Look at recent trend
-        recent = sorted_telemetry[-3:] if len(sorted_telemetry) >= 3 else sorted_telemetry
-        if len(recent) < 2:
-            return "stable"
-        # Calculate average change
-        total_change = 0.0
-        for i in range(1, len(recent)):
-            total_change += recent[i].ers_charge - recent[i-1].ers_charge
-        avg_change = total_change / (len(recent) - 1)
-        if avg_change > 0.05:
-            return "charging"
-        elif avg_change < -0.05:
-            return "depleting"
-        else:
-            return "stable"
-    @staticmethod
-    def is_fuel_critical(telemetry: List[EnrichedTelemetryWebhook]) -> bool:
-        """
-        Check if fuel situation is critical.
-        Args:
-            telemetry: List of enriched telemetry records
-        Returns:
-            True if fuel optimization score is below 0.7
-        """
-        if not telemetry:
-            return False
-        # Check most recent telemetry
+        # Use latest tire degradation rate from enrichment
         latest = max(telemetry, key=lambda x: x.lap)
-        return latest.fuel_optimization_score < 0.7
-    @staticmethod
-    def assess_driver_form(telemetry: List[EnrichedTelemetryWebhook]) -> str:
-        """
-        Assess driver consistency form.
-        Args:
-            telemetry: List of enriched telemetry records
-        Returns:
-            Form description: "excellent", "good", "inconsistent"
-        """
-        if not telemetry:
-            return "good"
-        # Get average consistency
-        avg_consistency = sum(t.driver_consistency for t in telemetry) / len(telemetry)
-        if avg_consistency >= 0.85:
-            return "excellent"
-        elif avg_consistency >= 0.75:
-            return "good"
-        else:
-            return "inconsistent"
+        return latest.tire_degradation_rate
     @staticmethod
     def project_tire_cliff(
@@ -214,65 +106,27 @@ class TelemetryAnalyzer:
         current_lap: int
     ) -> int:
         """
-        Project when tire degradation will hit 0.85 (performance cliff).
+        Project when tire cliff will be reached (using lap-level data).
         Args:
             telemetry: List of enriched telemetry records
             current_lap: Current lap number
         Returns:
-            Projected lap number when cliff will be reached
+            Estimated lap number when cliff will be reached
         """
         if not telemetry:
             return current_lap + 20  # Default assumption
-        # Get current degradation and rate
+        # Use tire cliff risk from enrichment
         latest = max(telemetry, key=lambda x: x.lap)
-        current_deg = latest.tire_degradation_index
-        if current_deg >= 0.85:
-            return current_lap  # Already at cliff
-        # Calculate rate
-        rate = TelemetryAnalyzer.calculate_tire_degradation_rate(telemetry)
-        if rate <= 0:
-            return current_lap + 50  # Not degrading, far future
-        # Project laps until 0.85
-        laps_until_cliff = (0.85 - current_deg) / rate
-        projected_lap = current_lap + int(laps_until_cliff)
-        return projected_lap
-    @staticmethod
-    def generate_telemetry_summary(telemetry: List[EnrichedTelemetryWebhook]) -> str:
-        """
-        Generate human-readable summary of telemetry trends.
-        Args:
-            telemetry: List of enriched telemetry records
-        Returns:
-            Summary string
-        """
-        if not telemetry:
-            return "No telemetry data available."
-        tire_rate = TelemetryAnalyzer.calculate_tire_degradation_rate(telemetry)
-        aero_avg = TelemetryAnalyzer.calculate_aero_efficiency_avg(telemetry)
-        ers_pattern = TelemetryAnalyzer.analyze_ers_pattern(telemetry)
-        fuel_critical = TelemetryAnalyzer.is_fuel_critical(telemetry)
-        driver_form = TelemetryAnalyzer.assess_driver_form(telemetry)
-        latest = max(telemetry, key=lambda x: x.lap)
-        summary = f"""Telemetry Analysis (Last {len(telemetry)} laps):
-- Tire degradation: {latest.tire_degradation_index:.2f} index, increasing at {tire_rate:.3f}/lap
-- Aero efficiency: {aero_avg:.2f} average
-- ERS: {latest.ers_charge:.2f} charge, {ers_pattern}
-- Fuel: {latest.fuel_optimization_score:.2f} score, {'CRITICAL' if fuel_critical else 'OK'}
-- Driver form: {driver_form} ({latest.driver_consistency:.2f} consistency)
-- Weather impact: {latest.weather_impact}"""
-        return summary
+        cliff_risk = latest.tire_cliff_risk
+        if cliff_risk >= 0.7:
+            return current_lap + 2  # Imminent cliff
+        elif cliff_risk >= 0.4:
+            return current_lap + 5  # Approaching cliff
+        else:
+            # Estimate based on optimal pit window
+            pit_window = latest.optimal_pit_window
+            return pit_window[1] if pit_window else current_lap + 15
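The rewritten projection replaces the old linear extrapolation with fixed risk bands and a pit-window fallback. A free-function sketch of exactly those thresholds:

```python
from typing import List, Optional

def project_tire_cliff(cliff_risk: float, current_lap: int,
                       pit_window: Optional[List[int]]) -> int:
    """Threshold form of the rewritten projection: risk bands, else pit-window end."""
    if cliff_risk >= 0.7:
        return current_lap + 2   # imminent cliff
    if cliff_risk >= 0.4:
        return current_lap + 5   # approaching cliff
    # Low risk: fall back to the enrichment's recommended pit window
    return pit_window[1] if pit_window else current_lap + 15

print(project_tire_cliff(0.8, 20, [25, 30]))  # 22
print(project_tire_cliff(0.5, 20, [25, 30]))  # 25
print(project_tire_cliff(0.1, 20, [25, 30]))  # 30
```

The trade-off: the enrichment stage already computed `tire_cliff_risk` and `optimal_pit_window`, so the AI layer no longer re-derives a degradation slope, it just bands the precomputed risk.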
View File
@@ -4,147 +4,105 @@ from typing import Dict, Any
def normalize_telemetry(payload: Dict[str, Any]) -> Dict[str, Any]: def normalize_telemetry(payload: Dict[str, Any]) -> Dict[str, Any]:
"""Normalize Pi/FastF1-like telemetry payload to Enricher expected schema. """Normalize lap-level telemetry payload from Pi stream to Enricher schema.
Accepted aliases: Accepted aliases for lap-level data:
- speed: Speed - lap_number: lap, Lap, LapNumber, lap_number
- throttle: Throttle
- brake: Brake, Brakes
- tire_compound: Compound, TyreCompound, Tire
- fuel_level: Fuel, FuelRel, FuelLevel
- ers: ERS, ERSCharge
- track_temp: TrackTemp, track_temperature
- rain_probability: RainProb, PrecipProb
- lap: Lap, LapNumber, lap_number
- total_laps: TotalLaps, total_laps - total_laps: TotalLaps, total_laps
- track_name: TrackName, track_name, Circuit - lap_time: lap_time, LapTime, Time
- driver_name: DriverName, driver_name, Driver - average_speed: average_speed, avg_speed, AvgSpeed
- current_position: Position, current_position - max_speed: max_speed, MaxSpeed, max
- tire_life_laps: TireAge, tire_age, tire_life_laps - tire_compound: tire_compound, Compound, TyreCompound, Tire
- rainfall: Rainfall, rainfall, Rain - tire_life_laps: tire_life_laps, TireAge, tire_age
- track_temperature: track_temperature, TrackTemp, track_temp
- rainfall: rainfall, Rainfall, Rain
Values are clamped and defaulted if missing. Returns normalized dict ready for enrichment layer.
""" """
aliases = { aliases = {
"lap": ["lap", "Lap", "LapNumber", "lap_number"], "lap_number": ["lap_number", "lap", "Lap", "LapNumber"],
"speed": ["speed", "Speed"],
"throttle": ["throttle", "Throttle"],
"brake": ["brake", "Brake", "Brakes"],
"tire_compound": ["tire_compound", "Compound", "TyreCompound", "Tire"],
"fuel_level": ["fuel_level", "Fuel", "FuelRel", "FuelLevel"],
"ers": ["ers", "ERS", "ERSCharge"],
"track_temp": ["track_temp", "TrackTemp", "track_temperature"],
"rain_probability": ["rain_probability", "RainProb", "PrecipProb"],
"total_laps": ["total_laps", "TotalLaps"], "total_laps": ["total_laps", "TotalLaps"],
"track_name": ["track_name", "TrackName", "Circuit"], "lap_time": ["lap_time", "LapTime", "Time"],
"driver_name": ["driver_name", "DriverName", "Driver"], "average_speed": ["average_speed", "avg_speed", "AvgSpeed"],
"current_position": ["current_position", "Position"], "max_speed": ["max_speed", "MaxSpeed", "max"],
"tire_compound": ["tire_compound", "Compound", "TyreCompound", "Tire"],
"tire_life_laps": ["tire_life_laps", "TireAge", "tire_age"], "tire_life_laps": ["tire_life_laps", "TireAge", "tire_age"],
"track_temperature": ["track_temperature", "TrackTemp", "track_temp"],
"rainfall": ["rainfall", "Rainfall", "Rain"], "rainfall": ["rainfall", "Rainfall", "Rain"],
} }
out: Dict[str, Any] = {} out: Dict[str, Any] = {}
def pick(key: str, default=None): def pick(key: str, default=None):
"""Pick first matching alias from payload."""
for k in aliases.get(key, [key]): for k in aliases.get(key, [key]):
if k in payload and payload[k] is not None: if k in payload and payload[k] is not None:
return payload[k] return payload[k]
return default return default
def clamp01(x, default=0.0): # Extract and validate lap-level fields
try: lap_number = pick("lap_number", 0)
v = float(x)
except (TypeError, ValueError):
return default
return max(0.0, min(1.0, v))
# Map values with sensible defaults
lap = pick("lap", 0)
try: try:
lap = int(lap) lap_number = int(lap_number)
except (TypeError, ValueError): except (TypeError, ValueError):
lap = 0 lap_number = 0
speed = pick("speed", 0.0) total_laps = pick("total_laps", 51)
try: try:
speed = float(speed) total_laps = int(total_laps)
except (TypeError, ValueError): except (TypeError, ValueError):
speed = 0.0 total_laps = 51
throttle = clamp01(pick("throttle", 0.0), 0.0) lap_time = pick("lap_time", None)
brake = clamp01(pick("brake", 0.0), 0.0) if lap_time:
out["lap_time"] = str(lap_time)
average_speed = pick("average_speed", 0.0)
try:
average_speed = float(average_speed)
    except (TypeError, ValueError):
        average_speed = 0.0
    max_speed = pick("max_speed", 0.0)
    try:
        max_speed = float(max_speed)
    except (TypeError, ValueError):
        max_speed = 0.0
    tire_compound = pick("tire_compound", "medium")
    if isinstance(tire_compound, str):
        tire_compound = tire_compound.upper()  # Keep uppercase for consistency
    else:
        tire_compound = "MEDIUM"
    tire_life_laps = pick("tire_life_laps", 0)
    try:
        tire_life_laps = int(tire_life_laps)
    except (TypeError, ValueError):
        tire_life_laps = 0
    track_temperature = pick("track_temperature", 25.0)
    try:
        track_temperature = float(track_temperature)
    except (TypeError, ValueError):
        track_temperature = 25.0
    rainfall = pick("rainfall", False)
    try:
        rainfall = bool(rainfall)
    except (TypeError, ValueError):
        rainfall = False
    # Build normalized output
    out.update({
        "lap_number": lap_number,
        "total_laps": total_laps,
        "average_speed": average_speed,
        "max_speed": max_speed,
        "tire_compound": tire_compound,
        "tire_life_laps": tire_life_laps,
        "track_temperature": track_temperature,
        "rainfall": rainfall,
    })
    return out
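The normalization above repeats the same try/except coercion for every field. A minimal standalone sketch of that pattern (the `coerce` helper is hypothetical, not part of the service):

```python
def coerce(value, cast, default):
    """Cast value with the given constructor, falling back to a safe default."""
    try:
        return cast(value)
    except (TypeError, ValueError):
        return default

raw = {"tire_life_laps": "19", "track_temperature": None, "max_speed": "oops"}
normalized = {
    "tire_life_laps": coerce(raw.get("tire_life_laps"), int, 0),
    "track_temperature": coerce(raw.get("track_temperature"), float, 25.0),
    "max_speed": coerce(raw.get("max_speed"), float, 0.0),
}
# → {"tire_life_laps": 19, "track_temperature": 25.0, "max_speed": 0.0}
```

Each malformed or missing field degrades to its documented default instead of rejecting the whole payload, which keeps the stream alive when the Pi sends partial rows.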

View File

@@ -25,24 +25,24 @@ _CALLBACK_URL = os.getenv("NEXT_STAGE_CALLBACK_URL")
class EnrichedRecord(BaseModel):
    """Lap-level enriched telemetry model."""
    lap: int
    tire_degradation_rate: float
    pace_trend: str
    tire_cliff_risk: float
    optimal_pit_window: List[int]
    performance_delta: float


@app.post("/ingest/telemetry")
async def ingest_telemetry(payload: Dict[str, Any] = Body(...)):
    """Receive raw lap-level telemetry (from Pi), normalize, enrich, return enriched with race context.

    Optionally forwards to NEXT_STAGE_CALLBACK_URL if set.
    """
    try:
        normalized = normalize_telemetry(payload)
        result = _enricher.enrich_lap_data(normalized)
        enriched = result["enriched_telemetry"]
        race_context = result["race_context"]
    except Exception as e:
@@ -85,3 +85,12 @@ async def list_enriched(limit: int = 50):
@app.get("/healthz")
async def healthz():
    return {"status": "ok", "stored": len(_recent)}


@app.post("/reset")
async def reset_enricher():
    """Reset enricher state for a new session/race."""
    global _enricher
    _enricher = Enricher()
    _recent.clear()
    return {"status": "reset", "message": "Enricher state and buffer cleared"}

View File

@@ -2,370 +2,254 @@ from __future__ import annotations
from dataclasses import dataclass, field
from typing import Dict, Any, Optional, List

import pandas as pd

# --- LAP-LEVEL TELEMETRY CONTRACT ---
# Input from Raspberry Pi (lap-level data):
# {
#   "lap_number": 27,
#   "total_laps": 51,
#   "lap_time": "0 days 00:01:27.318000",
#   "average_speed": 234.62,
#   "max_speed": 333.0,
#   "tire_compound": "MEDIUM",
#   "tire_life_laps": 19,
#   "track_temperature": 43.6,
#   "rainfall": false
# }

_TIRE_DEGRADATION_RATES = {
    "soft": 0.030,    # Fast degradation
    "medium": 0.020,  # Moderate degradation
    "hard": 0.015,    # Slow degradation
    "inter": 0.025,
    "wet": 0.022,
}

_TIRE_CLIFF_THRESHOLD = 25  # Laps before cliff risk increases significantly


@dataclass
class EnricherState:
    """Maintains race state across laps for trend analysis."""
    lap_times: List[float] = field(default_factory=list)   # Recent lap times in seconds
    lap_speeds: List[float] = field(default_factory=list)  # Recent average speeds
    current_tire_age: int = 0
    current_tire_compound: str = "medium"
    tire_stint_start_lap: int = 1
    total_laps: int = 51
    track_name: str = "Monza"


class Enricher:
    """
    HPC-simulated enrichment for lap-level F1 telemetry.

    Accepts lap-level data from Raspberry Pi and generates performance insights
    that simulate HPC computational analysis.
    """

    def __init__(self):
        self.state = EnricherState()
        self._baseline_lap_time: Optional[float] = None  # Best lap time as baseline

    def enrich_lap_data(self, lap_data: Dict[str, Any]) -> Dict[str, Any]:
        """
        Main enrichment method for lap-level data.
        Returns enriched telemetry + race context for AI layer.
        """
        # Extract lap data
        lap_number = int(lap_data.get("lap_number", 0))
        total_laps = int(lap_data.get("total_laps", 51))
        lap_time_str = lap_data.get("lap_time")
        average_speed = float(lap_data.get("average_speed", 0.0))
        max_speed = float(lap_data.get("max_speed", 0.0))
        tire_compound = str(lap_data.get("tire_compound", "MEDIUM")).lower()
        tire_life_laps = int(lap_data.get("tire_life_laps", 0))
        track_temperature = float(lap_data.get("track_temperature", 25.0))
        rainfall = bool(lap_data.get("rainfall", False))

        # Convert lap time to seconds
        lap_time_seconds = self._parse_lap_time(lap_time_str)

        # Update state
        self.state.lap_times.append(lap_time_seconds)
        self.state.lap_speeds.append(average_speed)
        self.state.current_tire_age = tire_life_laps
        self.state.current_tire_compound = tire_compound
        self.state.total_laps = total_laps

        # Keep only last 10 laps for analysis
        if len(self.state.lap_times) > 10:
            self.state.lap_times = self.state.lap_times[-10:]
            self.state.lap_speeds = self.state.lap_speeds[-10:]

        # Set baseline (best lap time)
        if self._baseline_lap_time is None or lap_time_seconds < self._baseline_lap_time:
            self._baseline_lap_time = lap_time_seconds

        # Compute HPC-simulated insights
        tire_deg_rate = self._compute_tire_degradation_rate(tire_compound, tire_life_laps, track_temperature)
        pace_trend = self._compute_pace_trend()
        tire_cliff_risk = self._compute_tire_cliff_risk(tire_compound, tire_life_laps)
        pit_window = self._compute_optimal_pit_window(lap_number, total_laps, tire_life_laps, tire_compound)
        performance_delta = self._compute_performance_delta(lap_time_seconds)

        # Build enriched telemetry
        enriched_telemetry = {
            "lap": lap_number,
            "tire_degradation_rate": round(tire_deg_rate, 3),
            "pace_trend": pace_trend,
            "tire_cliff_risk": round(tire_cliff_risk, 3),
            "optimal_pit_window": pit_window,
            "performance_delta": round(performance_delta, 2),
        }

        # Build race context
        race_context = {
            "race_info": {
                "track_name": self.state.track_name,
                "total_laps": total_laps,
                "current_lap": lap_number,
                "weather_condition": "Wet" if rainfall else "Dry",
                "track_temp_celsius": track_temperature,
            },
            "driver_state": {
                "driver_name": "Alonso",
                "current_position": 5,  # Mock - could be passed in
                "current_tire_compound": tire_compound,
                "tire_age_laps": tire_life_laps,
                "fuel_remaining_percent": self._estimate_fuel(lap_number, total_laps),
            },
        }

        return {
            "enriched_telemetry": enriched_telemetry,
            "race_context": race_context,
        }

    # --- HPC-Simulated Computation Methods ---

    def _compute_tire_degradation_rate(self, tire_compound: str, tire_age: int, track_temp: float) -> float:
        """
        Simulate HPC computation of tire degradation rate.
        Returns 0-1 value (higher = worse degradation).
        """
        base_rate = _TIRE_DEGRADATION_RATES.get(tire_compound, 0.020)

        # Temperature effect: higher temp = more degradation
        temp_multiplier = 1.0
        if track_temp > 45:
            temp_multiplier = 1.3
        elif track_temp > 40:
            temp_multiplier = 1.15
        elif track_temp < 20:
            temp_multiplier = 0.9

        # Age effect: accelerating increase past the wear threshold
        age_multiplier = 1.0
        if tire_age > 20:
            age_multiplier = 1.0 + ((tire_age - 20) * 0.05)  # +5% per lap over 20

        degradation = base_rate * tire_age * temp_multiplier * age_multiplier
        return min(1.0, degradation)

    def _compute_pace_trend(self) -> str:
        """
        Analyze recent lap times to determine pace trend.
        Returns: "improving", "stable", or "declining"
        """
        if len(self.state.lap_times) < 3:
            return "stable"

        recent_laps = self.state.lap_times[-5:]  # Last 5 laps

        # Compare first-half and second-half averages of the window
        avg_first_half = sum(recent_laps[:len(recent_laps) // 2]) / max(1, len(recent_laps) // 2)
        avg_second_half = sum(recent_laps[len(recent_laps) // 2:]) / max(1, len(recent_laps) - len(recent_laps) // 2)

        diff = avg_second_half - avg_first_half
        if diff < -0.5:   # Getting faster by more than 0.5s
            return "improving"
        elif diff > 0.5:  # Getting slower by more than 0.5s
            return "declining"
        else:
            return "stable"

    def _compute_tire_cliff_risk(self, tire_compound: str, tire_age: int) -> float:
        """
        Compute probability of hitting tire performance cliff.
        Returns 0-1 (0 = no risk, 1 = imminent cliff).
        """
        # Different compounds have different cliff points
        cliff_points = {
            "soft": 15,
            "medium": 25,
            "hard": 35,
            "inter": 20,
            "wet": 18,
        }
        cliff_point = cliff_points.get(tire_compound, 25)

        if tire_age < cliff_point - 5:
            return 0.0
        elif tire_age >= cliff_point + 5:
            return 1.0
        else:
            # Linear risk increase in 10-lap window around cliff point
            return (tire_age - (cliff_point - 5)) / 10.0

    def _compute_optimal_pit_window(self, current_lap: int, total_laps: int, tire_age: int, tire_compound: str) -> List[int]:
        """
        Calculate optimal pit stop window based on tire degradation.
        Returns [start_lap, end_lap] for pit window.
        """
        cliff_risk = self._compute_tire_cliff_risk(tire_compound, tire_age)

        if cliff_risk > 0.7:
            # Urgent pit needed
            return [current_lap + 1, current_lap + 3]
        elif cliff_risk > 0.4:
            # Pit soon
            return [current_lap + 3, current_lap + 6]
        else:
            # Tire still good, estimate based on compound
            if tire_compound == "soft":
                laps_remaining = max(0, 18 - tire_age)
            elif tire_compound == "medium":
                laps_remaining = max(0, 28 - tire_age)
            else:  # hard
                laps_remaining = max(0, 38 - tire_age)

            pit_lap = min(current_lap + laps_remaining, total_laps - 5)
            return [max(current_lap + 1, pit_lap - 2), pit_lap + 2]

    def _compute_performance_delta(self, current_lap_time: float) -> float:
        """
        Calculate performance delta vs baseline lap time.
        Negative = slower than baseline, positive = faster.
        """
        if self._baseline_lap_time is None:
            return 0.0
        return self._baseline_lap_time - current_lap_time  # Negative if slower

    def _estimate_fuel(self, current_lap: int, total_laps: int) -> float:
        """Estimate remaining fuel percentage based on lap progression."""
        return max(0.0, 100.0 * (1.0 - (current_lap / total_laps)))

    def _parse_lap_time(self, lap_time_str: Optional[str]) -> float:
        """Convert lap time string to seconds."""
        if not lap_time_str:
            return 90.0  # Default ~1:30
        try:
            # Handle pandas Timedelta string format
            td = pd.to_timedelta(lap_time_str)
            return td.total_seconds()
        except (ValueError, TypeError):
            return 90.0
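The cliff-risk and pit-window heuristics above can be exercised in isolation. A sketch that re-derives the same math as standalone functions (constants copied from the enricher; function names here are illustrative):

```python
def cliff_risk(compound: str, tire_age: int) -> float:
    """0 below (cliff - 5) laps, 1 at (cliff + 5), linear ramp in between."""
    cliff_point = {"soft": 15, "medium": 25, "hard": 35, "inter": 20, "wet": 18}.get(compound, 25)
    if tire_age < cliff_point - 5:
        return 0.0
    if tire_age >= cliff_point + 5:
        return 1.0
    return (tire_age - (cliff_point - 5)) / 10.0

def pit_window(current_lap: int, total_laps: int, tire_age: int, compound: str) -> list:
    """[start, end] lap range, pulled earlier as cliff risk rises."""
    risk = cliff_risk(compound, tire_age)
    if risk > 0.7:
        return [current_lap + 1, current_lap + 3]   # urgent
    if risk > 0.4:
        return [current_lap + 3, current_lap + 6]   # soon
    stint = {"soft": 18, "medium": 28}.get(compound, 38)
    pit_lap = min(current_lap + max(0, stint - tire_age), total_laps - 5)
    return [max(current_lap + 1, pit_lap - 2), pit_lap + 2]

print(cliff_risk("medium", 22))          # 0.2 — inside the 10-lap ramp around lap 25
print(pit_window(30, 51, 22, "medium"))  # [34, 38]
```

At lap 30 on 22-lap-old mediums the risk is still low (0.2), so the window is anchored on the remaining stint estimate (28 - 22 = 6 laps) rather than on the urgency branches.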

View File

@@ -1,8 +1,11 @@
fastapi==0.115.2
uvicorn==0.31.1
httpx==0.27.2
aiohttp==3.10.10
elevenlabs==2.18.0
python-dotenv==1.1.1
fastf1==3.6.1
pandas==2.3.3
requests==2.32.5
websockets==13.1
pydantic==2.9.2

View File

@@ -1,13 +1,14 @@
"""
Raspberry Pi Telemetry Stream Simulator - Lap-Level Data

Reads the ALONSO_2023_MONZA_LAPS.csv file lap by lap and simulates
live telemetry streaming from a Raspberry Pi sensor.

Sends data to the HPC simulation layer via HTTP POST at fixed
1-minute intervals between laps.

Usage:
    python simulate_pi_stream.py
    python simulate_pi_stream.py --interval 30  # 30 seconds between laps
"""

import argparse
@@ -19,39 +20,32 @@ import pandas as pd
import requests


def load_lap_csv(filepath: Path) -> pd.DataFrame:
    """Load lap-level telemetry data from CSV file."""
    df = pd.read_csv(filepath)

    # Convert lap_time to timedelta if it's not already
    if 'lap_time' in df.columns and df['lap_time'].dtype == 'object':
        df['lap_time'] = pd.to_timedelta(df['lap_time'])

    print(f"✓ Loaded {len(df)} laps from {filepath}")
    print(f"  Laps: {df['lap_number'].min():.0f}–{df['lap_number'].max():.0f}")
    return df


def lap_to_json(row: pd.Series) -> Dict[str, Any]:
    """Convert a lap DataFrame row to a JSON-compatible dictionary."""
    data = {
        'lap_number': int(row['lap_number']) if pd.notna(row['lap_number']) else None,
        'total_laps': int(row['total_laps']) if pd.notna(row['total_laps']) else None,
        'lap_time': str(row['lap_time']) if pd.notna(row['lap_time']) else None,
        'average_speed': float(row['average_speed']) if pd.notna(row['average_speed']) else 0.0,
        'max_speed': float(row['max_speed']) if pd.notna(row['max_speed']) else 0.0,
        'tire_compound': str(row['tire_compound']) if pd.notna(row['tire_compound']) else 'UNKNOWN',
        'tire_life_laps': int(row['tire_life_laps']) if pd.notna(row['tire_life_laps']) else 0,
        'track_temperature': float(row['track_temperature']) if pd.notna(row['track_temperature']) else 0.0,
        'rainfall': bool(row['rainfall'])
    }
    return data
@@ -59,17 +53,17 @@ def row_to_json(row: pd.Series) -> Dict[str, Any]:
def simulate_stream(
    df: pd.DataFrame,
    endpoint: str,
    interval: int = 60,
    start_lap: int = 1,
    end_lap: int = None
):
    """
    Simulate live telemetry streaming with fixed interval between laps.

    Args:
        df: DataFrame with lap-level telemetry data
        endpoint: HPC API endpoint URL
        interval: Fixed interval in seconds between laps (default: 60 seconds)
        start_lap: Starting lap number
        end_lap: Ending lap number (None = all laps)
    """
@@ -79,95 +73,84 @@ def simulate_stream(
    filtered_df = filtered_df[filtered_df['lap_number'] <= end_lap].copy()

    if len(filtered_df) == 0:
        print("❌ No laps in specified lap range")
        return

    # Reset index for easier iteration
    filtered_df = filtered_df.reset_index(drop=True)

    print(f"\n🏁 Starting lap-level telemetry stream simulation")
    print(f"   Endpoint: {endpoint}")
    print(f"   Laps: {start_lap} → {end_lap or 'end'}")
    print(f"   Interval: {interval} seconds between laps")
    print(f"   Total laps: {len(filtered_df)}")
    print(f"   Est. duration: {len(filtered_df) * interval / 60:.1f} minutes\n")

    sent_count = 0
    error_count = 0

    try:
        for i in range(len(filtered_df)):
            row = filtered_df.iloc[i]
            lap_num = int(row['lap_number'])

            # Convert lap to JSON
            lap_data = lap_to_json(row)

            # Send lap data
            try:
                response = requests.post(
                    endpoint,
                    json=lap_data,
                    timeout=5.0
                )
                if response.status_code == 200:
                    sent_count += 1
                    progress = (i + 1) / len(filtered_df) * 100

                    # Print lap info
                    print(f"  📡 Lap {lap_num}/{int(row['total_laps'])}: "
                          f"Avg Speed: {row['average_speed']:.1f} km/h, "
                          f"Tire: {row['tire_compound']} (age: {int(row['tire_life_laps'])} laps) "
                          f"[{progress:.0f}%]")

                    # Show response if it contains strategies
                    try:
                        response_data = response.json()
                        if 'strategies_generated' in response_data:
                            print(f"     ✓ Generated {response_data['strategies_generated']} strategies")
                    except ValueError:
                        pass
                else:
                    error_count += 1
                    print(f"  ⚠ Lap {lap_num}: HTTP {response.status_code}: {response.text[:100]}")
            except requests.RequestException as e:
                error_count += 1
                print(f"  ⚠ Lap {lap_num}: Connection error: {str(e)[:100]}")

            # Sleep for fixed interval before next lap (except for last lap)
            if i < len(filtered_df) - 1:
                time.sleep(interval)

        print(f"\n✅ Stream complete!")
        print(f"   Sent: {sent_count} laps")
        print(f"   Errors: {error_count}")
    except KeyboardInterrupt:
        print(f"\n⏸ Stream interrupted by user")
        print(f"   Sent: {sent_count}/{len(filtered_df)} laps")


def main():
    parser = argparse.ArgumentParser(
        description="Simulate Raspberry Pi lap-level telemetry streaming"
    )
    parser.add_argument("--endpoint", type=str, default="http://localhost:8000/ingest/telemetry",
                        help="HPC API endpoint (default: http://localhost:8000/ingest/telemetry)")
    parser.add_argument("--interval", type=int, default=60,
                        help="Fixed interval in seconds between laps (default: 60)")
    parser.add_argument("--start-lap", type=int, default=1, help="Starting lap number")
    parser.add_argument("--end-lap", type=int, default=None, help="Ending lap number")
@@ -176,19 +159,19 @@ def main():
    try:
        # Hardcoded CSV file location in the same folder as this script
        script_dir = Path(__file__).parent
        data_path = script_dir / "ALONSO_2023_MONZA_LAPS.csv"

        df = load_lap_csv(data_path)
        simulate_stream(
            df,
            args.endpoint,
            args.interval,
            args.start_lap,
            args.end_lap
        )
    except FileNotFoundError:
        print(f"❌ File not found: {data_path}")
        print(f"   Make sure ALONSO_2023_MONZA_LAPS.csv is in the scripts/ folder")
        sys.exit(1)
    except Exception as e:
        print(f"❌ Error: {e}")

scripts/simulate_pi_websocket.py

@@ -0,0 +1,403 @@
#!/usr/bin/env python3
"""
WebSocket-based Raspberry Pi Telemetry Simulator.
Connects to AI Intelligence Layer via WebSocket and:
1. Streams lap telemetry to AI layer
2. Receives control commands (brake_bias, differential_slip) from AI layer
3. Applies control adjustments in real-time
Usage:
python simulate_pi_websocket.py --interval 5 --ws-url ws://localhost:9000/ws/pi
"""
from __future__ import annotations
import argparse
import asyncio
import json
import logging
from pathlib import Path
from typing import Dict, Any, Optional
import sys
try:
    import aiohttp  # used below when calling the enrichment service
    import pandas as pd
    import websockets
except ImportError:
    print("Error: Required packages not installed.")
    print("Run: pip install pandas websockets aiohttp")
    sys.exit(1)
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
class PiSimulator:
"""WebSocket-based Pi simulator with control feedback."""
def __init__(self, csv_path: Path, ws_url: str, interval: float = 60.0, enrichment_url: str = "http://localhost:8000"):
self.csv_path = csv_path
self.ws_url = ws_url
self.enrichment_url = enrichment_url
self.interval = interval
self.df: Optional[pd.DataFrame] = None
self.current_controls = {
"brake_bias": 5,
"differential_slip": 5
}
def load_lap_csv(self) -> pd.DataFrame:
"""Load lap-level CSV data."""
logger.info(f"Loading CSV from {self.csv_path}")
df = pd.read_csv(self.csv_path)
logger.info(f"Loaded {len(df)} laps")
return df
def lap_to_raw_payload(self, row: pd.Series) -> Dict[str, Any]:
"""
Convert CSV row to raw lap telemetry (for enrichment service).
This is what the real Pi would send.
"""
return {
"lap_number": int(row["lap_number"]),
"total_laps": int(row["total_laps"]),
"lap_time": str(row["lap_time"]),
"average_speed": float(row["average_speed"]),
"max_speed": float(row["max_speed"]),
"tire_compound": str(row["tire_compound"]),
"tire_life_laps": int(row["tire_life_laps"]),
"track_temperature": float(row["track_temperature"]),
"rainfall": bool(row.get("rainfall", False))
}
async def enrich_telemetry(self, raw_telemetry: Dict[str, Any]) -> Dict[str, Any]:
"""
Send raw telemetry to enrichment service and get back enriched data.
This simulates the Pi → Enrichment → AI flow.
"""
import aiohttp
try:
async with aiohttp.ClientSession() as session:
async with session.post(
f"{self.enrichment_url}/ingest/telemetry",
json=raw_telemetry,
timeout=aiohttp.ClientTimeout(total=5.0)
) as response:
if response.status == 200:
result = await response.json()
logger.info(f" ✓ Enrichment service processed lap {raw_telemetry['lap_number']}")
return result
else:
logger.error(f" ✗ Enrichment service error: {response.status}")
return None
except Exception as e:
logger.error(f" ✗ Failed to connect to enrichment service: {e}")
logger.error(f" Make sure enrichment service is running: python scripts/serve.py")
return None
def lap_to_enriched_payload(self, row: pd.Series) -> Dict[str, Any]:
"""
Convert CSV row to enriched telemetry payload.
Simulates the enrichment layer output.
"""
# Basic enrichment simulation (would normally come from enrichment service)
lap_number = int(row["lap_number"])
tire_age = int(row["tire_life_laps"])
# Simple tire degradation simulation
tire_deg_rate = min(1.0, 0.02 * tire_age)
tire_cliff_risk = max(0.0, min(1.0, (tire_age - 20) / 10.0))
# Pace trend (simplified)
pace_trend = "stable"
if tire_age > 25:
pace_trend = "declining"
elif tire_age < 5:
pace_trend = "improving"
# Optimal pit window
if tire_age > 20:
pit_window = [lap_number + 1, lap_number + 3]
else:
pit_window = [lap_number + 10, lap_number + 15]
# Performance delta (random for simulation)
import random
performance_delta = random.uniform(-1.5, 1.0)
enriched_telemetry = {
"lap": lap_number,
"tire_degradation_rate": round(tire_deg_rate, 3),
"pace_trend": pace_trend,
"tire_cliff_risk": round(tire_cliff_risk, 3),
"optimal_pit_window": pit_window,
"performance_delta": round(performance_delta, 2)
}
race_context = {
"race_info": {
"track_name": "Monza",
"total_laps": int(row["total_laps"]),
"current_lap": lap_number,
"weather_condition": "Wet" if row.get("rainfall", False) else "Dry",
"track_temp_celsius": float(row["track_temperature"])
},
"driver_state": {
"driver_name": "Alonso",
"current_position": 5,
"current_tire_compound": str(row["tire_compound"]).lower(),
"tire_age_laps": tire_age,
"fuel_remaining_percent": max(0.0, 100.0 * (1.0 - (lap_number / int(row["total_laps"]))))
},
"competitors": []
}
return {
"type": "telemetry",
"lap_number": lap_number,
"enriched_telemetry": enriched_telemetry,
"race_context": race_context
}
async def stream_telemetry(self):
"""Main WebSocket streaming loop."""
self.df = self.load_lap_csv()
# Reset enrichment service state for fresh session
logger.info(f"Resetting enrichment service state...")
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.post(
f"{self.enrichment_url}/reset",
timeout=aiohttp.ClientTimeout(total=5.0)
) as response:
if response.status == 200:
logger.info("✓ Enrichment service reset successfully")
else:
logger.warning(f"⚠ Enrichment reset returned status {response.status}")
except Exception as e:
logger.warning(f"⚠ Could not reset enrichment service: {e}")
logger.warning(" Continuing anyway (enricher may have stale state)")
logger.info(f"Connecting to WebSocket: {self.ws_url}")
try:
async with websockets.connect(self.ws_url) as websocket:
logger.info("WebSocket connected!")
# Wait for welcome message
welcome = await websocket.recv()
logger.info(f"Received: {welcome}")
# Stream each lap
for idx, row in self.df.iterrows():
lap_number = int(row["lap_number"])
logger.info(f"\n{'='*60}")
logger.info(f"Lap {lap_number}/{int(row['total_laps'])}")
logger.info(f"{'='*60}")
# Build raw telemetry payload (what real Pi would send)
raw_telemetry = self.lap_to_raw_payload(row)
logger.info(f"[RAW] Lap {lap_number} telemetry prepared")
# Send to enrichment service for processing
enriched_data = await self.enrich_telemetry(raw_telemetry)
if not enriched_data:
logger.error("Failed to get enrichment, skipping lap")
await asyncio.sleep(self.interval)
continue
# Extract enriched telemetry and race context from enrichment service
enriched_telemetry = enriched_data.get("enriched_telemetry")
race_context = enriched_data.get("race_context")
if not enriched_telemetry or not race_context:
logger.error("Invalid enrichment response, skipping lap")
await asyncio.sleep(self.interval)
continue
# Build WebSocket payload for AI layer
ws_payload = {
"type": "telemetry",
"lap_number": lap_number,
"enriched_telemetry": enriched_telemetry,
"race_context": race_context
}
# Send enriched telemetry to AI layer via WebSocket
await websocket.send(json.dumps(ws_payload))
logger.info(f"[SENT] Lap {lap_number} enriched telemetry to AI layer")
# Wait for control command response(s)
try:
response = await asyncio.wait_for(websocket.recv(), timeout=5.0)
response_data = json.loads(response)
if response_data.get("type") == "control_command":
brake_bias = response_data.get("brake_bias", 5)
diff_slip = response_data.get("differential_slip", 5)
strategy_name = response_data.get("strategy_name", "N/A")
message = response_data.get("message")
self.current_controls["brake_bias"] = brake_bias
self.current_controls["differential_slip"] = diff_slip
logger.info(f"[RECEIVED] Control Command:")
logger.info(f" ├─ Brake Bias: {brake_bias}/10")
logger.info(f" ├─ Differential Slip: {diff_slip}/10")
if strategy_name != "N/A":
logger.info(f" └─ Strategy: {strategy_name}")
if message:
logger.info(f" └─ {message}")
# Apply controls (in real Pi, this would adjust hardware)
self.apply_controls(brake_bias, diff_slip)
# If message indicates processing, wait for update
if message and "Processing" in message:
logger.info(" AI is generating strategies, waiting for update...")
try:
update = await asyncio.wait_for(websocket.recv(), timeout=45.0)
update_data = json.loads(update)
if update_data.get("type") == "control_command_update":
brake_bias = update_data.get("brake_bias", 5)
diff_slip = update_data.get("differential_slip", 5)
strategy_name = update_data.get("strategy_name", "N/A")
self.current_controls["brake_bias"] = brake_bias
self.current_controls["differential_slip"] = diff_slip
logger.info(f"[UPDATED] Strategy-Based Control:")
logger.info(f" ├─ Brake Bias: {brake_bias}/10")
logger.info(f" ├─ Differential Slip: {diff_slip}/10")
logger.info(f" └─ Strategy: {strategy_name}")
self.apply_controls(brake_bias, diff_slip)
except asyncio.TimeoutError:
logger.warning("[TIMEOUT] Strategy generation took too long")
elif response_data.get("type") == "error":
logger.error(f"[ERROR] {response_data.get('message')}")
except asyncio.TimeoutError:
logger.warning("[TIMEOUT] No control command received within 5s")
# Wait before next lap
logger.info(f"Waiting {self.interval}s before next lap...")
await asyncio.sleep(self.interval)
# All laps complete
logger.info("\n" + "="*60)
logger.info("RACE COMPLETE - All laps streamed")
logger.info("="*60)
# Send disconnect message
await websocket.send(json.dumps({"type": "disconnect"}))
except websockets.exceptions.WebSocketException as e:
logger.error(f"WebSocket error: {e}")
logger.error("Is the AI Intelligence Layer running on port 9000?")
except Exception as e:
logger.error(f"Unexpected error: {e}")
def apply_controls(self, brake_bias: int, differential_slip: int):
"""
Apply control adjustments to the car.
In real Pi, this would interface with hardware controllers.
"""
logger.info(f"[APPLYING] Setting brake_bias={brake_bias}, diff_slip={differential_slip}")
# Simulate applying controls (in real implementation, this would:
# - Adjust brake bias actuator
# - Modify differential slip controller
# - Send CAN bus messages to ECU
# - Update dashboard display)
# For simulation, just log the change
if brake_bias > 6:
logger.info(" → Brake bias shifted REAR (protecting front tires)")
elif brake_bias < 5:
logger.info(" → Brake bias shifted FRONT (aggressive turn-in)")
else:
logger.info(" → Brake bias NEUTRAL")
if differential_slip > 6:
logger.info(" → Differential slip INCREASED (gentler on tires)")
elif differential_slip < 5:
logger.info(" → Differential slip DECREASED (aggressive cornering)")
else:
logger.info(" → Differential slip NEUTRAL")
async def main():
parser = argparse.ArgumentParser(
description="WebSocket-based Raspberry Pi Telemetry Simulator"
)
parser.add_argument(
"--interval",
type=float,
default=60.0,
help="Seconds between laps (default: 60s)"
)
parser.add_argument(
"--ws-url",
type=str,
default="ws://localhost:9000/ws/pi",
help="WebSocket URL for AI layer (default: ws://localhost:9000/ws/pi)"
)
parser.add_argument(
"--enrichment-url",
type=str,
default="http://localhost:8000",
help="Enrichment service URL (default: http://localhost:8000)"
)
parser.add_argument(
"--csv",
type=str,
default=None,
help="Path to lap CSV file (default: scripts/ALONSO_2023_MONZA_LAPS.csv)"
)
args = parser.parse_args()
# Determine CSV path
if args.csv:
csv_path = Path(args.csv)
else:
script_dir = Path(__file__).parent
csv_path = script_dir / "ALONSO_2023_MONZA_LAPS.csv"
if not csv_path.exists():
logger.error(f"CSV file not found: {csv_path}")
sys.exit(1)
# Create simulator and run
simulator = PiSimulator(
csv_path=csv_path,
ws_url=args.ws_url,
enrichment_url=args.enrichment_url,
interval=args.interval
)
logger.info("Starting WebSocket Pi Simulator")
logger.info(f"CSV: {csv_path}")
logger.info(f"Enrichment Service: {args.enrichment_url}")
logger.info(f"WebSocket URL: {args.ws_url}")
logger.info(f"Interval: {args.interval}s per lap")
logger.info("-" * 60)
await simulator.stream_telemetry()
if __name__ == "__main__":
asyncio.run(main())

scripts/test_websocket.py
@@ -0,0 +1,157 @@
#!/usr/bin/env python3
"""
Quick test to verify WebSocket control system.
Tests the complete flow: Pi → AI → Control Commands
"""
import asyncio
import json
import sys
try:
import websockets
except ImportError:
print("Error: websockets not installed")
print("Run: pip install websockets")
sys.exit(1)
async def test_websocket():
"""Test WebSocket connection and control flow."""
ws_url = "ws://localhost:9000/ws/pi"
print(f"Testing WebSocket connection to {ws_url}")
print("-" * 60)
try:
async with websockets.connect(ws_url) as websocket:
print("✓ WebSocket connected!")
# 1. Receive welcome message
welcome = await websocket.recv()
welcome_data = json.loads(welcome)
print(f"✓ Welcome message: {welcome_data.get('message')}")
# 2. Send test telemetry (lap 1)
test_payload = {
"type": "telemetry",
"lap_number": 1,
"enriched_telemetry": {
"lap": 1,
"tire_degradation_rate": 0.15,
"pace_trend": "stable",
"tire_cliff_risk": 0.05,
"optimal_pit_window": [25, 30],
"performance_delta": 0.0
},
"race_context": {
"race_info": {
"track_name": "Monza",
"total_laps": 51,
"current_lap": 1,
"weather_condition": "Dry",
"track_temp_celsius": 28.0
},
"driver_state": {
"driver_name": "Test Driver",
"current_position": 5,
"current_tire_compound": "medium",
"tire_age_laps": 1,
"fuel_remaining_percent": 98.0
},
"competitors": []
}
}
print("\n→ Sending lap 1 telemetry...")
await websocket.send(json.dumps(test_payload))
# 3. Wait for response (short timeout for first laps)
response = await asyncio.wait_for(websocket.recv(), timeout=5.0)
response_data = json.loads(response)
if response_data.get("type") == "control_command":
print("✓ Received control command!")
print(f" Brake Bias: {response_data.get('brake_bias')}/10")
print(f" Differential Slip: {response_data.get('differential_slip')}/10")
print(f" Message: {response_data.get('message', 'N/A')}")
else:
print(f"✗ Unexpected response: {response_data}")
# 4. Send two more laps to trigger strategy generation
for lap_num in [2, 3]:
test_payload["lap_number"] = lap_num
test_payload["enriched_telemetry"]["lap"] = lap_num
test_payload["race_context"]["race_info"]["current_lap"] = lap_num
print(f"\n→ Sending lap {lap_num} telemetry...")
await websocket.send(json.dumps(test_payload))
# Lap 3 triggers Gemini, so expect two responses
if lap_num == 3:
print(f" (lap 3 will trigger strategy generation - may take 10-30s)")
# First response: immediate acknowledgment
response1 = await asyncio.wait_for(websocket.recv(), timeout=5.0)
response1_data = json.loads(response1)
print(f"✓ Immediate response: {response1_data.get('message', 'Processing...')}")
# Second response: strategy-based controls
print(" Waiting for strategy generation to complete...")
response2 = await asyncio.wait_for(websocket.recv(), timeout=45.0)
response2_data = json.loads(response2)
if response2_data.get("type") == "control_command_update":
print(f"✓ Lap {lap_num} strategy-based control received!")
print(f" Brake Bias: {response2_data.get('brake_bias')}/10")
print(f" Differential Slip: {response2_data.get('differential_slip')}/10")
strategy = response2_data.get('strategy_name')
if strategy and strategy != "N/A":
print(f" Strategy: {strategy}")
print(f" Total Strategies: {response2_data.get('total_strategies')}")
print("✓ Strategy generation successful!")
else:
print(f"✗ Unexpected response: {response2_data}")
else:
# Laps 1-2: just one response
response = await asyncio.wait_for(websocket.recv(), timeout=5.0)
response_data = json.loads(response)
if response_data.get("type") == "control_command":
print(f"✓ Lap {lap_num} control command received!")
print(f" Brake Bias: {response_data.get('brake_bias')}/10")
print(f" Differential Slip: {response_data.get('differential_slip')}/10")
# 5. Disconnect
print("\n→ Sending disconnect...")
await websocket.send(json.dumps({"type": "disconnect"}))
print("\n" + "=" * 60)
print("✓ ALL TESTS PASSED!")
print("=" * 60)
print("\nWebSocket control system is working correctly.")
print("Ready to run: python scripts/simulate_pi_websocket.py")
except websockets.exceptions.WebSocketException as e:
print(f"\n✗ WebSocket error: {e}")
print("\nMake sure the AI Intelligence Layer is running:")
print(" cd ai_intelligence_layer && python main.py")
sys.exit(1)
except asyncio.TimeoutError:
print("\n✗ Timeout waiting for response")
print("AI layer may be processing (Gemini API can be slow)")
print("Check ai_intelligence_layer logs for details")
sys.exit(1)
except Exception as e:
print(f"\n✗ Unexpected error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
print("WebSocket Control System Test")
print("=" * 60)
asyncio.run(test_websocket())
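For reference, the lap-enrichment heuristics that `lap_to_enriched_payload` applies in the simulator can be reduced to one small pure function. This is a sketch of the simulator's simplified fallback model only (linear tire wear, a cliff ramp over laps 20-30, threshold-based pace trend), not the real enrichment service; the function name `enrich_lap` is illustrative:

```python
def enrich_lap(lap_number: int, total_laps: int, tire_age: int) -> dict:
    """Simplified enrichment heuristics mirroring the simulator's fallback path."""
    tire_deg_rate = min(1.0, 0.02 * tire_age)                      # linear wear, capped at 1.0
    tire_cliff_risk = max(0.0, min(1.0, (tire_age - 20) / 10.0))   # ramps from 0 to 1 over laps 20-30
    if tire_age > 25:
        pace_trend = "declining"
    elif tire_age < 5:
        pace_trend = "improving"
    else:
        pace_trend = "stable"
    if tire_age > 20:
        pit_window = [lap_number + 1, lap_number + 3]              # worn tires: pit soon
    else:
        pit_window = [lap_number + 10, lap_number + 15]
    fuel = max(0.0, 100.0 * (1.0 - lap_number / total_laps))       # linear burn over the race
    return {
        "tire_degradation_rate": round(tire_deg_rate, 3),
        "tire_cliff_risk": round(tire_cliff_risk, 3),
        "pace_trend": pace_trend,
        "optimal_pit_window": pit_window,
        "fuel_remaining_percent": fuel,
    }
```

Because it is pure, it can be unit-tested without the WebSocket or enrichment services running, which is useful when tuning the thresholds.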