
Elsai Guardrails

Programmable Guardrails for LLM-based Applications

[Elsai Guardrails architecture diagram]

Why Elsai Guardrails?

Elsai Guardrails secures LLM-based applications against common threats such as toxic content, leaked personal data, and prompt injection. With built-in protections and flexible configuration options, you can keep your AI applications safe and compliant.

Key Features

  • Toxicity Detection: Automatically detect and block offensive or harmful content
  • Sensitive Data Protection: Identify and protect personal information like emails, phone numbers, and credit cards
  • Content Classification: Detect jailbreak attempts, prompt injection, and malicious code using semantic routing
  • Multi-LLM Integration: Seamless integration with major LLM providers
  • Flexible Deployment: Use as a wrapper around your LLM calls or run separate input/output checks (see the sketch after this list)
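
Wrapper mode is shown in the Quick Example below. For the separate-checks mode, here is a minimal sketch of screening input and output around your own LLM call; `check_input` and `check_output` are hypothetical names used for illustration, not confirmed Elsai Guardrails API:

```python
# Hypothetical sketch of the "separate input/output checks" deployment.
# check_input / check_output are illustrative names only; consult the
# Elsai Guardrails API reference for the actual method names.
from elsai_guardrails.guardrails import LLMRails

rails = LLMRails.from_config("config.yml")

def call_my_llm(prompt: str) -> str:
    # Stand-in for any provider call (OpenAI, Azure, etc.)
    return "Hi there!"

user_message = "Hello!"

# 1. Screen the prompt before it reaches the model
if not rails.check_input(user_message):  # hypothetical method
    raise ValueError("Input blocked by guardrails")

# 2. Call the LLM through your own client
llm_output = call_my_llm(user_message)

# 3. Screen the model's answer before returning it to the user
if not rails.check_output(llm_output):  # hypothetical method
    llm_output = "Sorry, I can't help with that."

print(llm_output)
```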

Quick Example

```python
from elsai_guardrails.guardrails import LLMRails

# Initialize with configuration
rails = LLMRails.from_config("config.yml")

# Safe LLM calls with automatic guardrails
response = rails.generate(
    messages=[{"role": "user", "content": "Hello!"}]
)
```
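
The example above doesn't show the shape of the returned response. Assuming `generate` follows the common chat-message convention of returning a dict with `role` and `content` keys (an assumption, not confirmed API), a guarded call that trips the sensitive-data check might be consumed like this:

```python
# Sketch only: assumes generate() returns a chat-style dict with
# "role" and "content" keys; the actual return type may differ.
from elsai_guardrails.guardrails import LLMRails

rails = LLMRails.from_config("config.yml")

response = rails.generate(
    messages=[{"role": "user", "content": "My card is 4111 1111 1111 1111"}]
)

# If the sensitive-data guardrail triggers, "content" should hold a
# refusal or redacted message rather than an echo of the card number.
print(response["content"])
```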

Get Started

Ready to secure your LLM application? Check out our Installation Guide to get started in minutes!

Released under the MIT License.