Guardrails

Guardrails AI is a framework for adding validation and safety rails to Large Language Model outputs. Langtrace provides native integration with Guardrails, automatically capturing traces with validator information and metadata to help you monitor and analyze your model’s performance and validation results.

Setup

  1. Install the Langtrace Python SDK (pip install langtrace-python-sdk).

  2. Install the Guardrails SDK (pip install guardrails-ai) and set up your validators according to the Guardrails documentation.

  3. Initialize Langtrace in your code and implement Guardrails validation:

import os

from langtrace_python_sdk import langtrace
from guardrails import Guard, OnFailAction
from guardrails.hub import ProfanityFree, ToxicLanguage

# Initialize Langtrace before making any LLM calls; the OpenAI call
# below also expects OPENAI_API_KEY to be set in the environment.
langtrace.init(api_key=os.environ["LANGTRACE_API_KEY"])

# Name the guard so it is easy to identify in the Langtrace dashboard
guard = Guard()
guard.name = 'ChatBotGuard'

# Validate the model output and raise an exception on failure
guard.use_many(
    ProfanityFree(on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(on_fail=OnFailAction.EXCEPTION),
    on="output"
)

# The guard wraps the LLM call, so Langtrace traces both the model
# response and each validator run
result = guard(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    model="gpt-4",
    stream=False,
)
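
Because both validators are configured with OnFailAction.EXCEPTION, a failed validation surfaces as an exception at the call site. The following is a minimal sketch of handling that case, continuing the example above and assuming the ValidationError class exported from guardrails.errors in recent Guardrails releases:

from guardrails.errors import ValidationError

try:
    result = guard(
        messages=[{"role": "user", "content": "Hello, how are you?"}],
        model="gpt-4",
        stream=False,
    )
    reply = result.validated_output
except ValidationError as err:
    # Langtrace still records the failed validator run and the exception
    reply = "Sorry, I can't respond to that."
    print(f"Validation failed: {err}")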

Monitoring Validation Results

Once implemented, Langtrace automatically captures:

  • Validator execution and results
  • Validation metadata
  • Model inputs and outputs
  • Validation failures and exceptions
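
The same information is also available programmatically on the object returned by the guard call. A minimal sketch, continuing the example above and assuming the validation_passed, raw_llm_output, and validated_output fields that current Guardrails releases expose on ValidationOutcome:

outcome = guard(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    model="gpt-4",
)

print(outcome.validation_passed)   # True when every validator passed
print(outcome.raw_llm_output)      # model output before validation
print(outcome.validated_output)    # output after the validators ran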

View your traces in the Langtrace dashboard.
