Guardrails AI is a framework for adding validation and safety rails to Large Language Model outputs. Langtrace integrates natively with Guardrails, automatically capturing traces that include validator information and metadata so you can monitor and analyze your model's performance and validation results.