Architecture
Overview of Elsai Guardrails architecture and design.
System Architecture
Elsai Guardrails follows a layered architecture:
User Input
↓
Input Rails (Validation)
↓
LLM Processing
↓
Output Rails (Validation)
↓
User Response
Components
GuardrailSystem
Core component that performs safety checks:
- Toxicity detection
- Sensitive data detection
- Content classification
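The checks above can be sketched as a single component that runs each detector and aggregates the results. This is an illustrative sketch only: the method names and the stubbed detection logic are assumptions, not the library's real API (the actual checks call a remote toxicity API, a PII model, and a semantic classifier).

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    """Outcome of a single guardrail check."""
    name: str
    passed: bool

class GuardrailSystem:
    """Sketch: run every safety check and collect the results."""

    def run_checks(self, text: str) -> list[CheckResult]:
        return [
            self._check_toxicity(text),
            self._check_sensitive_data(text),
            self._classify_content(text),
        ]

    def _check_toxicity(self, text: str) -> CheckResult:
        # Stub: the real system calls a remote toxicity API.
        toxic = any(w in text.lower() for w in ("idiot", "stupid"))
        return CheckResult("toxicity", not toxic)

    def _check_sensitive_data(self, text: str) -> CheckResult:
        # Stub: the real system uses a PII detection model.
        return CheckResult("sensitive_data", "@" not in text)

    def _classify_content(self, text: str) -> CheckResult:
        # Stub: the real system uses semantic routing.
        return CheckResult("classification", "ignore previous" not in text.lower())
```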
LLMRails
High-level component that integrates:
- LLM configuration and invocation
- Input validation
- Output validation
- Result aggregation
Configuration System
YAML-based configuration for:
- LLM settings
- Guardrail behavior
- Thresholds and rules
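A configuration file along these lines illustrates the idea. The key names and values below are assumptions for illustration, not the library's documented schema:

```yaml
# Hypothetical config sketch -- key names are illustrative,
# not the library's documented schema.
llm:
  provider: openai
  model: gpt-4o
guardrails:
  toxicity:
    enabled: true
    threshold: 0.7
  sensitive_data:
    enabled: true
    block_on_detection: true
```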
Data Flow
Input Processing
- User sends input
- Input rails validate content
- If validation fails → Block and return error
- If validation passes → Proceed to LLM
LLM Processing
- Format messages for LLM
- Invoke LLM API
- Receive response
Output Processing
- LLM generates response
- Output rails validate content
- If validation fails → Block and return error
- If validation passes → Return to user
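The three stages above compose into one layered flow. A minimal sketch, assuming hypothetical `validate` and `call_llm` callables rather than the library's real API:

```python
def process(user_input: str, validate, call_llm) -> dict:
    """Sketch of the layered flow: input rails -> LLM -> output rails."""
    # Input rails: block before the LLM is ever invoked.
    if not validate(user_input):
        return {"blocked": True, "stage": "input", "response": None}
    # LLM processing: format, invoke, receive.
    response = call_llm(user_input)
    # Output rails: validate what the model produced.
    if not validate(response):
        return {"blocked": True, "stage": "output", "response": None}
    return {"blocked": False, "stage": None, "response": response}
```

Note that the same guardrail checks can serve as both input and output rails; only the stage at which a failure is reported differs.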
Guardrail Checks
Toxicity Detection
- Uses a remote API service
- Classifies content as toxic/offensive/non-toxic
- Configurable threshold
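The threshold logic can be sketched as mapping the remote API's per-label scores to a single class. The label names and the 0.7 default below are illustrative assumptions:

```python
def classify_toxicity(scores: dict[str, float], threshold: float = 0.7) -> str:
    """Map per-label toxicity scores to toxic/offensive/non-toxic.

    Label names and the 0.7 default are illustrative assumptions,
    not the library's documented behavior.
    """
    for label in ("toxic", "offensive"):
        if scores.get(label, 0.0) >= threshold:
            return label
    return "non-toxic"
```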
Sensitive Data Detection
- Uses a BERT-based model
- Detects various PII types
- Configurable blocking
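To make the idea concrete, here is a regex-based sketch of PII detection. The actual library uses a BERT-based model, not regexes; the pattern names and patterns are illustrative assumptions:

```python
import re

# Illustrative patterns only -- the real system uses a BERT-based model.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_pii(text: str) -> list[str]:
    """Return the PII types found in `text`."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
```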
Content Classification
- Uses semantic routing
- Detects jailbreak, malicious, injection attempts
- Requires an embedding encoder (configurable via ENCODER_TYPE)
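Semantic routing works by embedding the input and comparing it against reference vectors for each attack category. A toy sketch: in practice the encoder selected via ENCODER_TYPE produces high-dimensional embeddings, and the route names and 3-d vectors below are illustrative assumptions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy reference embeddings; a real encoder (chosen via ENCODER_TYPE)
# produces high-dimensional vectors for labeled example utterances.
ROUTES = {
    "jailbreak": [1.0, 0.0, 0.0],
    "injection": [0.0, 1.0, 0.0],
    "benign": [0.0, 0.0, 1.0],
}

def route(embedding: list[float], min_sim: float = 0.6) -> str:
    """Return the best-matching route, or 'benign' if nothing is close."""
    best, best_sim = max(
        ((name, cosine(embedding, vec)) for name, vec in ROUTES.items()),
        key=lambda pair: pair[1],
    )
    return best if best_sim >= min_sim else "benign"
```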
LLM Integration
Supports multiple providers through a unified interface:
- OpenAI
- Azure OpenAI
- Anthropic
- Gemini
- AWS Bedrock
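A unified interface lets the rails stay provider-agnostic. The sketch below uses a structural `Protocol`; the method name `complete` and the provider classes are assumptions, not the library's real API:

```python
from typing import Protocol

class LLMProvider(Protocol):
    """Hypothetical unified provider interface."""
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"openai-response:{prompt}"

class AnthropicProvider:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"anthropic-response:{prompt}"

def run(provider: LLMProvider, prompt: str) -> str:
    # Callers depend only on the shared interface, so providers
    # (Azure OpenAI, Gemini, AWS Bedrock, ...) are interchangeable.
    return provider.complete(prompt)
```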
Configuration
Flexible configuration through:
- YAML files
- YAML strings
- Programmatic configuration
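One common way these options combine is for programmatic settings to override defaults loaded from YAML. A sketch of that merge, with illustrative key names that are assumptions rather than the library's documented schema:

```python
# Illustrative defaults -- key names are assumptions.
DEFAULTS = {
    "llm": {"provider": "openai", "model": "gpt-4o"},
    "guardrails": {"toxicity": {"threshold": 0.7}},
}

def merge(base: dict, override: dict) -> dict:
    """Recursively merge programmatic overrides into a base config."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out
```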
Next Steps
- FAQ - Common questions
