# RouterV2 Architecture

Detailed system architecture and design patterns for RouterV2.
## 🏗 System Architecture Overview
RouterV2 implements a multi-stage pipeline architecture designed for high-performance request processing and intelligent routing decisions. The system is built around parallel processing: stages with no data dependencies on each other execute concurrently, while the pipeline maintains data consistency and flow control.
## 🔄 Pipeline Architecture

### Core Pipeline Design
```mermaid
graph TD
    A[HTTP Request] --> B["Handler.process()"]
    B --> C["Pipeline.getEmptyCtx()"]
    C --> D["Pipeline.execute()"]
    D --> E[ParallelProcess Engine]
    E --> F[S00: Initialize]
    E --> G[S01: Prepare Input]
    E --> H[S02: Check Redirect]
    E --> I[S03: Load External Dependencies]
    E --> J[S04: Process Data]
    E --> K[S05: Make Decision]
    E --> L[S06: Check Overflow]
    E --> M[S07: Get Content]
    E --> N[S08: Create Response]
    E --> O[S09: Save Response]
    E --> P[S10: Verbose Response]
    P --> Q[HTTP Response]
```
### Message Context (msgCtx) Flow
The pipeline operates on a message context object that flows through all stages:
```javascript
// Message Context Structure
{
  details: {
    requestId: "unique-request-id",
    sessionId: "session-identifier",
    timestamp: "2025-01-01T12:00:00.000Z"
  },
  input: {
    query: { /* query parameters */ },
    headers: { /* HTTP headers */ },
    body: { /* request body */ }
  },
  data: {
    campaign: { /* campaign data */ },
    brand: { /* brand information */ },
    product: { /* product details */ }
  },
  response: {
    type: "redirect|content|ads|widget",
    content: { /* response content */ },
    headers: { /* response headers */ },
    cookies: { /* cookies to set */ }
  },
  cache: {
    session: { /* session cache data */ },
    document: { /* document cache data */ }
  },
  status: "SUCCESS|FAILED|REDIRECT"
}
```
## 🧩 Component Architecture

### 1. Handler Layer
```mermaid
graph LR
    A[HTTP Request] --> B[Handler]
    B --> C[Pipeline Executor]
    B --> D[Context Manager]
    B --> E[Logger]
    C --> F[Response]
```
Responsibilities:

- Request/response lifecycle management
- Pipeline execution coordination
- Error handling and logging
- Resource cleanup (cache connections)
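These responsibilities can be sketched as a single execute → log → cleanup flow. This is a minimal hypothetical version, not the actual `Handler.process()`; the `cacheClient.release()` call stands in for whatever connection cleanup the real handler performs, and the guarantee of interest is that cleanup runs even when a stage throws.

```javascript
// Hypothetical handler sketch: coordinate pipeline execution, log failures,
// and always release resources (e.g. cache connections) on completion.
async function handleRequest(pipeline, cacheClient, req) {
  const ctx = pipeline.getEmptyCtx(req);
  try {
    await pipeline.execute(ctx);
    return ctx.response;
  } catch (err) {
    ctx.status = 'FAILED';
    console.error('pipeline failed', { requestId: ctx.details.requestId, err });
    throw err;
  } finally {
    // Cleanup runs on both the success and error paths.
    await cacheClient.release();
  }
}
```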
### 2. Pipeline Engine
```mermaid
graph TD
    A[Pipeline Engine] --> B[Stage Orchestrator]
    B --> C[Parallel Processor]
    C --> D[Stage S00-S10]
    D --> E[Context Updater]
    E --> F[Next Stage]
```
Key Features:

- Parallel stage execution where possible
- Context state management
- Error propagation and recovery
- Performance monitoring
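One way to realize these features is a wave-based orchestrator: in each wave, every stage whose declared dependencies are complete runs concurrently, and any stage error marks the context `FAILED` and stops the pipeline. This is a sketch under assumed stage/dependency names, not the actual ParallelProcess Engine.

```javascript
// Hypothetical stage orchestrator: stages whose dependencies are satisfied
// run concurrently in "waves"; errors propagate and fail the context.
async function runStages(stages, ctx) {
  const done = new Set();
  let pending = [...stages];
  while (pending.length > 0) {
    // Every stage whose dependencies are all complete can run in this wave.
    const ready = pending.filter(s => s.deps.every(d => done.has(d)));
    if (ready.length === 0) throw new Error('dependency cycle in pipeline');
    try {
      await Promise.all(ready.map(s => s.run(ctx)));
    } catch (err) {
      ctx.status = 'FAILED';   // error propagation: fail fast, surface the cause
      throw err;
    }
    ready.forEach(s => done.add(s.name));
    pending = pending.filter(s => !done.has(s.name));
  }
  return ctx;
}
```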
### 3. Service Layer Architecture

#### Cache Services
```mermaid
graph LR
    A[CacheService] --> B[Redis Client]
    A --> C[Document Cache]
    A --> D[Content Cache]
    B --> E[Session Data]
    C --> F[Campaign Config]
    D --> G[Content Templates]
```
#### Decision Services
```mermaid
graph LR
    A[DecisionService] --> B[External Decision API]
    A --> C[Rules Engine]
    A --> D[Campaign Matcher]
    B --> E[Routing Decision]
    C --> F[Business Rules]
    D --> G[Campaign Selection]
```
#### Content Services
```mermaid
graph LR
    A[ContentCache] --> B[Template Engine]
    A --> C[Static Content]
    A --> D[Dynamic Content]
    B --> E[Mustache Templates]
    C --> F[CSS/JS/Images]
    D --> G[Generated HTML]
```
## 🔀 Data Flow Architecture

### Request Processing Flow
```mermaid
sequenceDiagram
    participant Client
    participant Handler
    participant Pipeline
    participant Cache
    participant Decision
    participant Content
    participant DB
    Client->>Handler: HTTP Request
    Handler->>Pipeline: Initialize Context
    Pipeline->>Cache: Load Session Data
    Cache-->>Pipeline: Session Info
    Pipeline->>Cache: Load Campaign Data
    Cache-->>Pipeline: Campaign Config
    Pipeline->>Decision: Make Routing Decision
    Decision-->>Pipeline: Decision Result
    Pipeline->>Content: Generate Content
    Content-->>Pipeline: Content Data
    Pipeline->>DB: Save Response (if needed)
    Pipeline->>Handler: Complete Context
    Handler-->>Client: HTTP Response
```
### Parallel Processing Model
```mermaid
graph TD
    A[Stage Input] --> B{"Can Process in Parallel?"}
    B -->|Yes| C[Parallel Execution]
    B -->|No| D[Sequential Execution]
    C --> E[Sub-stage 1]
    C --> F[Sub-stage 2]
    C --> G[Sub-stage 3]
    E --> H[Merge Results]
    F --> H
    G --> H
    D --> I[Single Thread]
    I --> H
    H --> J[Stage Output]
```
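The parallel branch of this model can be sketched as independent sub-stage lookups fanned out with `Promise.all` and merged back into the context in one step. The service names (`loadCampaign`, `loadBrand`, `loadProduct`) are hypothetical stand-ins for whatever S03 actually loads; since Node.js is single-threaded, the merge needs no locking.

```javascript
// Hypothetical "load external dependencies" stage: independent lookups run
// in parallel, then results merge into ctx.data in a single step.
async function loadExternalData(ctx, services) {
  const [campaign, brand, product] = await Promise.all([
    services.loadCampaign(ctx.input.query.campaign),
    services.loadBrand(ctx.input.query.brandfile),
    services.loadProduct(ctx.input.query.product)
  ]);
  Object.assign(ctx.data, { campaign, brand, product });  // merge results
  return ctx;
}
```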
## 🎯 Decision Engine Architecture

### Decision Making Process
```mermaid
graph TD
    A[Request Parameters] --> B[Decision Service]
    B --> C[Parameter Extraction]
    C --> D[External Decision API]
    D --> E[Decision Response]
    E --> F[Route Determination]
    C --> G[Campaign Matching]
    G --> H[Experience Type Selection]
    H --> I[Content Type Decision]
    F --> J[Final Routing Decision]
    I --> J
```
### Decision Parameters
```javascript
// Decision API Parameters
{
  wizsid: "session-identifier",
  product: "auto-insurance",
  brandfile: "brand-config.json",
  mobile: true,
  vendor: "publisher-name",
  campaign: "campaign-id",
  etype: "experience-type",
  slideTree: "navigation-path"
}
```
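A sketch of the parameter-extraction step: assembling the fields above from the message context into a query string for the external decision API. The user-agent sniff for `mobile` and the helper name are assumptions for illustration, not the actual extraction rules.

```javascript
// Hypothetical parameter extraction for the external decision API call.
function buildDecisionQuery(ctx) {
  const q = ctx.input.query;
  const params = {
    wizsid: ctx.details.sessionId,
    product: q.product,
    brandfile: q.brandfile,
    mobile: /Mobi/i.test(ctx.input.headers['user-agent'] || ''),  // assumed heuristic
    vendor: q.vendor,
    campaign: q.campaign,
    etype: q.etype,
    slideTree: q.slideTree
  };
  // Drop absent parameters so they are not serialized as "undefined".
  Object.keys(params).forEach(k => params[k] === undefined && delete params[k]);
  return new URLSearchParams(params).toString();
}
```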
## 💾 Caching Architecture

### Multi-Layer Cache Strategy
```mermaid
graph TD
    A[Request] --> B{"Cache Layer 1: Redis"}
    B -->|Hit| C[Return Cached Data]
    B -->|Miss| D{"Cache Layer 2: Document Cache"}
    D -->|Hit| E["Update Redis & Return"]
    D -->|Miss| F{"Cache Layer 3: Content Cache"}
    F -->|Hit| G["Update Caches & Return"]
    F -->|Miss| H[Generate Content]
    H --> I[Update All Caches]
    I --> J[Return Data]
```
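The strategy above is a classic read-through hierarchy, which can be sketched generically: try each layer in order, and on a hit back-fill the faster layers that missed; on a full miss, generate and populate all layers. The layer interface (`get`/`set`) is an assumption for illustration.

```javascript
// Generic read-through lookup over ordered cache layers (fastest first).
// A hit at layer i back-fills layers 0..i-1; a full miss generates the
// value and populates every layer.
async function cachedLookup(key, layers, generate) {
  for (let i = 0; i < layers.length; i++) {
    const value = await layers[i].get(key);
    if (value !== undefined && value !== null) {
      await Promise.all(layers.slice(0, i).map(l => l.set(key, value)));
      return value;
    }
  }
  const value = await generate(key);
  await Promise.all(layers.map(l => l.set(key, value)));
  return value;
}
```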
### Cache Types and Usage
#### 1. Session Cache (Redis)

- Purpose: User session state and temporary data
- TTL: Configurable (typically 30 minutes)
- Data: Session variables, user preferences, temporary state

#### 2. Document Cache

- Purpose: Campaign configurations and business rules
- TTL: Configurable (typically 5 minutes)
- Data: Campaign settings, brand configurations, routing rules

#### 3. Content Cache

- Purpose: Generated content and templates
- TTL: Configurable (typically 1 hour)
- Data: Rendered HTML, CSS, JavaScript, images
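An illustrative configuration fragment for the TTLs above; the key names are assumptions, not the actual config schema.

```javascript
// Illustrative cache TTL configuration (key names are hypothetical).
const cacheConfig = {
  cache: {
    session:  { store: 'redis',    ttlSeconds: 30 * 60 },  // 30 minutes
    document: { store: 'document', ttlSeconds: 5 * 60 },   // 5 minutes
    content:  { store: 'content',  ttlSeconds: 60 * 60 }   // 1 hour
  }
};
```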
## 🔧 Configuration Architecture

### Configuration Hierarchy
```mermaid
graph TD
    A[Base Config] --> B[Environment Config]
    B --> C[Runtime Config]
    C --> D[Dynamic Config]
    A --> E[config.js]
    B --> F[config.int.js / config.prod.js]
    C --> G[Environment Variables]
    D --> H[Document Cache Config]
```
### Configuration Sources
- Static Configuration: Base configuration files
- Environment Configuration: Environment-specific overrides
- Runtime Configuration: Environment variables and command-line args
- Dynamic Configuration: Document cache and external configuration services
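The hierarchy above resolves by layered override: each later source wins over the one before it, with nested objects merged rather than replaced. A minimal deep-merge sketch (the merge semantics are an assumption; the real loader may differ):

```javascript
// Hypothetical layered config resolution: base -> environment -> runtime
// -> dynamic. Later layers override earlier ones; nested objects deep-merge.
function mergeConfig(...layers) {
  return layers.reduce((acc, layer) => {
    for (const [key, value] of Object.entries(layer || {})) {
      acc[key] =
        value && typeof value === 'object' && !Array.isArray(value)
          ? mergeConfig(acc[key] || {}, value)  // deep-merge nested objects
          : value;                              // scalars/arrays override
    }
    return acc;
  }, {});
}
```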
## 🚀 Performance Architecture

### Optimization Strategies
#### 1. Parallel Processing

- Multiple pipeline stages execute concurrently
- Independent operations run in parallel
- Dependency management ensures correct execution order

#### 2. Intelligent Caching

- Multi-layer cache hierarchy
- Cache warming strategies
- Intelligent cache invalidation

#### 3. Connection Pooling

- Redis connection pooling
- HTTP client connection reuse
- Database connection management

#### 4. Resource Management

- Memory-efficient context handling
- Garbage collection optimization
- Resource cleanup on request completion
### Performance Monitoring Points
```mermaid
graph LR
    A[Request Start] --> B[Pipeline Init]
    B --> C[Stage Execution]
    C --> D[Cache Operations]
    D --> E[External API Calls]
    E --> F[Content Generation]
    F --> G[Response Creation]
    G --> H[Request End]
    B --> I[Metrics Collection]
    C --> I
    D --> I
    E --> I
    F --> I
    G --> I
```
## 🔒 Security Architecture

### Security Layers
- Input Validation: Request parameter validation and sanitization
- Authentication: JWT token validation (when required)
- Authorization: Role-based access control
- Data Protection: Sensitive data encryption and masking
- Rate Limiting: Request rate limiting and throttling
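The first of these layers can be sketched as an allow-list validator: only known parameters pass, each must match a strict pattern, and everything else is dropped. The parameter names echo the decision parameters shown earlier; the patterns are illustrative assumptions, not the actual validation rules.

```javascript
// Hypothetical allow-list input validation: unknown keys are dropped,
// known keys must match a strict pattern or the request is rejected.
const ALLOWED_PARAMS = {
  wizsid:   /^[\w-]{1,64}$/,
  product:  /^[\w-]{1,64}$/,
  campaign: /^[\w-]{1,64}$/,
  etype:    /^[\w-]{1,32}$/
};

function validateQuery(query) {
  const clean = {};
  for (const [key, pattern] of Object.entries(ALLOWED_PARAMS)) {
    const value = query[key];
    if (value === undefined) continue;                     // optional parameter
    if (typeof value !== 'string' || !pattern.test(value)) {
      throw new Error(`invalid parameter: ${key}`);
    }
    clean[key] = value;
  }
  return clean;                                            // unknown keys dropped
}
```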
### Security Flow
```mermaid
graph TD
    A[Request] --> B[Input Validation]
    B --> C[Authentication Check]
    C --> D[Authorization Check]
    D --> E[Rate Limiting]
    E --> F[Process Request]
    F --> G[Data Sanitization]
    G --> H[Response]
```
## 📈 Scalability Architecture

### Horizontal Scaling
- Stateless service design
- Session state externalized to Redis
- Load balancer compatible
- Kubernetes pod auto-scaling
### Vertical Scaling
- Efficient memory usage
- CPU optimization
- I/O optimization
- Resource monitoring and alerting
Next: Pipeline Documentation - Detailed pipeline stage documentation