
Elsai Guardrails

Programmable Guardrails for LLM-based Applications

(Figure: Elsai Guardrails architecture)

Why Elsai Guardrails?

Elsai Guardrails provides a comprehensive solution for securing your LLM-based applications. With built-in protection against common threats and flexible configuration options, you can ensure your AI applications are safe and compliant.

Key Features

  • Toxicity Detection: Automatically detect and block offensive or harmful content
  • Sensitive Data Protection: Identify and protect personal information like emails, phone numbers, and credit cards
  • Content Classification: Detect jailbreak attempts, prompt injection, and malicious code using semantic routing
  • Off-Topic Detection: Keep AI conversations focused by defining allowed topics and blocking off-topic inputs
  • SQL Syntax Validation: Validate SQL queries for major dialects (PostgreSQL, MySQL, SQLite, and more) before execution
  • Multi-LLM Integration: Seamless integration with major LLM providers
  • Flexible Deployment: Use as a wrapper around your LLM calls or perform separate input/output checks (see the sketch after the Quick Example)
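
These features are driven by the configuration file you pass to `LLMRails.from_config` (see the Quick Example below). The following `config.yml` sketch is purely illustrative: the key names and structure are assumptions for readability, not the library's documented schema.

```yaml
# Illustrative config.yml sketch; all keys below are assumed,
# not the documented Elsai Guardrails schema.
guardrails:
  toxicity:
    enabled: true                        # block offensive or harmful content
  sensitive_data:
    enabled: true
    entities: [email, phone_number, credit_card]
  topics:
    allowed: [billing, account_support]  # off-topic inputs are rejected
  sql_validation:
    dialect: postgres                    # validate queries before execution
```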

Quick Example

```python
from elsai_guardrails.guardrails import LLMRails

# Initialize with configuration
rails = LLMRails.from_config("config.yml")

# Safe LLM calls with automatic guardrails
response = rails.generate(
    messages=[{"role": "user", "content": "Hello!"}]
)
```
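
If you prefer separate input/output checks over the wrapper style, the flow might look like the sketch below. The method names `check_input` and `check_output`, the `.allowed` attribute, and the `call_your_llm` helper are assumptions for illustration, not the confirmed API; consult the API reference for the actual interface.

```python
# Hypothetical sketch of separate input/output checks; check_input,
# check_output, and .allowed are assumed names, not the confirmed
# Elsai Guardrails API.
from elsai_guardrails.guardrails import LLMRails

rails = LLMRails.from_config("config.yml")

user_message = "Hello!"
verdict = rails.check_input(user_message)   # assumed: screen the prompt first
if verdict.allowed:
    raw = call_your_llm(user_message)       # your own LLM client call
    response = rails.check_output(raw)      # assumed: screen the raw output
```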

What's New

Version 0.1.1 introduces powerful new features:

  • Off-Topic Detection - Keep conversations focused on allowed topics
  • SQL Syntax Validation - Validate SQL for 7 major dialects
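
The exact SQL-validation entry point in Elsai Guardrails isn't shown on this page. As a standalone illustration of what dialect-aware syntax validation looks like, here is a sketch built on the open-source sqlglot parser rather than the Elsai API:

```python
# Standalone illustration of dialect-aware SQL syntax validation using
# sqlglot; Elsai Guardrails' own SQL validation API may differ.
import sqlglot
from sqlglot.errors import ParseError

def is_valid_sql(query: str, dialect: str = "postgres") -> bool:
    """Return True if the query parses under the given dialect."""
    try:
        sqlglot.parse_one(query, read=dialect)
        return True
    except ParseError:
        return False

print(is_valid_sql("SELECT id, name FROM users WHERE active"))  # True
print(is_valid_sql("SELECT foo( FROM bar"))                     # False: syntax error
```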

See What's New | Release Notes

Get Started

Ready to secure your LLM application? Check out our Installation Guide to get started in minutes!

Released under the MIT License.