Cleanlab
Integrate Cleanlab TLM with Langtrace for LLM observability
Cleanlab Integration
Cleanlab provides tools for evaluating and improving the quality of LLM outputs. The Cleanlab TLM (Trustworthy Language Model) library helps you assess the trustworthiness of LLM responses and provides explanations for its evaluations.
This guide shows you how to integrate Cleanlab TLM with Langtrace to monitor and trace your Cleanlab interactions.
Installation
First, install the required packages:
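A minimal install, assuming the PyPI package names `langtrace-python-sdk` (the Langtrace SDK) and `cleanlab-tlm` (Cleanlab's standalone TLM client):

```bash
pip install langtrace-python-sdk cleanlab-tlm
```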
Integration Example
Here’s a complete example showing how to integrate Cleanlab TLM with Langtrace:
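The sketch below assumes the standalone `cleanlab-tlm` client and API keys exposed as the `LANGTRACE_API_KEY` and `CLEANLAB_TLM_API_KEY` environment variables; adapt the initialization to your own setup:

```python
import os

# Initialize Langtrace before importing Cleanlab TLM so its calls are instrumented
from langtrace_python_sdk import langtrace

langtrace.init(api_key=os.environ["LANGTRACE_API_KEY"])

from cleanlab_tlm import TLM

# The client reads CLEANLAB_TLM_API_KEY from the environment.
# Requesting "explanation" in the log makes TLM return the reasoning
# behind each trustworthiness score, which is then captured in the trace.
tlm = TLM(options={"log": ["explanation"]})

# Score a prompt: the result contains the model's response,
# a trustworthiness score, and the requested explanation
result = tlm.prompt("What is the third month of the year alphabetically?")

print("Response:", result["response"])
print("Trustworthiness score:", result["trustworthiness_score"])
print("Explanation:", result["log"]["explanation"])
```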
What Gets Traced
When you integrate Cleanlab TLM with Langtrace, the following information is captured in your traces:
- Service Name: Cleanlab
- Service Type: framework
- Inputs: The prompt or question sent to the model
- Response: The model’s response
- Trustworthiness Score: The score (between 0 and 1) calculated by Cleanlab TLM
- Explanation: The reasoning behind the trustworthiness assessment (when enabled)
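These fields mirror the dictionary that `tlm.prompt` returns. An illustrative (not literal) example of its shape, assuming explanations were enabled via `options`:

```python
# Illustrative result of tlm.prompt(); the values are made up
{
    "response": "The capital of France is Paris.",
    "trustworthiness_score": 0.99,  # a value between 0 and 1
    "log": {
        "explanation": "The response is consistent across resampled answers ...",
    },
}
```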
Viewing Traces
After running your application with the Cleanlab TLM integration, you can view the traces in the Langtrace dashboard. The traces will show the Cleanlab TLM operations, including:
- The prompt operation
- The trustworthiness score calculation
Each trace will include the inputs, outputs, and metadata from your Cleanlab TLM interactions.
Example Traces
Cleanlab TLM traces in the Langtrace dashboard include detailed information about each interaction, such as the trustworthiness score and its explanation.
Additional Configuration
You can customize your Cleanlab TLM configuration by adjusting the options parameter:
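For instance (parameter names follow the `cleanlab-tlm` client; treat the specific values as illustrative):

```python
from cleanlab_tlm import TLM

tlm = TLM(
    # quality_preset trades cost and latency against scoring accuracy
    quality_preset="best",  # e.g. "low", "medium", "high", "best"
    options={
        "model": "gpt-4o",       # underlying LLM used for generation and scoring
        "log": ["explanation"],  # include explanations in results and traces
        "max_tokens": 512,       # cap on the generated response length
    },
)
```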
For more information about Cleanlab and its capabilities, visit cleanlab.ai.