Guardrails
Learn how to use Langtrace with Guardrails AI for enhanced LLM validation and tracing
Guardrails AI is a framework for adding validation and safety rails to Large Language Model outputs. Langtrace provides native integration with Guardrails, automatically capturing traces with validator information and metadata to help you monitor and analyze your model’s performance and validation results.
Setup
- Install Langtrace's SDK.
- Install the Guardrails SDK and set up your validators according to the Guardrails documentation.
- Initialize Langtrace in your code and implement Guardrails validation, as shown in the sketch below.
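The following is a minimal sketch of that setup, assuming the langtrace-python-sdk and guardrails-ai packages are installed and that a ToxicLanguage validator has been added from the Guardrails Hub; substitute whichever validators you actually use.

```python
# Minimal sketch: Langtrace + Guardrails.
# Assumes `pip install langtrace-python-sdk guardrails-ai` and
# `guardrails hub install hub://guardrails/toxic_language` have been run.

# Initialize Langtrace before creating or running any Guard so its spans are captured.
from langtrace_python_sdk import langtrace

langtrace.init(api_key="<YOUR_LANGTRACE_API_KEY>")

from guardrails import Guard
from guardrails.hub import ToxicLanguage  # example validator; swap in your own

# Attach a validator to a Guard; on_fail="exception" raises when validation fails.
guard = Guard().use(ToxicLanguage, on_fail="exception")

# Validate text (for example, an LLM response). Langtrace records the validator
# execution, its metadata, and the input/output of this call.
result = guard.validate("The weather today is mild and pleasant.")
print(result.validation_passed)
```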
Monitoring Validation Results
Once implemented, Langtrace automatically captures:
- Validator execution and results
- Validation metadata
- Model inputs and outputs
- Validation failures and exceptions
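As a sketch of how a failure surfaces, the snippet below catches a failing validation and assumes Guardrails raises guardrails.errors.ValidationError when a Guard is configured with on_fail="exception" (as in the setup example above).

```python
# Hypothetical failure path: the Guard from the setup sketch raises on invalid output.
from guardrails.errors import ValidationError

try:
    guard.validate("Some output that trips the validator.")
except ValidationError as err:
    # The failed validator and its error message also appear in the trace metadata.
    print(f"Validation failed: {err}")
```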
You can then view and analyze these traces in the Langtrace dashboard.