LLM Frameworks
LiteLLM
Langtrace and LiteLLM Integration Guide
Langtrace integrates directly with LiteLLM, offering detailed, real-time insights into performance metrics such as cost, token usage, accuracy, and latency.
You’ll need an API key from Langtrace. Sign up for Langtrace if you haven’t done so already.
LiteLLM SDK
- Set up the environment variable:
Shell
export LANGTRACE_API_KEY=YOUR_LANGTRACE_API_KEY
- Add the Langtrace callback to your LiteLLM client
main.py
import litellm

# Send traces for successful LLM calls to Langtrace
litellm.success_callback = ["langtrace"]
- Use LiteLLM completion
main.py
from litellm import completion
response = completion(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
)
print(response)
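The completion() call returns a response in the OpenAI format; the token counts behind Langtrace’s cost and usage metrics live in its usage block. A minimal sketch of pulling those fields out, using a plain dict that mirrors the response shape (the real return value is a litellm ModelResponse exposing the same fields, and the content shown here is mocked):

```python
# Mocked response mirroring the OpenAI-format payload returned by
# litellm.completion(); the field names are the standard OpenAI ones.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Roses are red..."}}
    ],
    "usage": {"prompt_tokens": 14, "completion_tokens": 9, "total_tokens": 23},
}

# The generated text lives under choices[0].message.content.
text = response["choices"][0]["message"]["content"]

# Token counts feed the cost/usage metrics Langtrace displays.
tokens = response["usage"]["total_tokens"]

print(text, tokens)
```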
LiteLLM Proxy
- Create config.yaml:
config.yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/gpt-4

litellm_settings:
  callbacks: ["langtrace"]

environment_variables:
  LANGTRACE_API_KEY: <YOUR_LANGTRACE_API_KEY>
- Run the LiteLLM Proxy
Shell
litellm --config config.yaml --detailed_debug
- Test your setup
You can now view your traces on the Langtrace dashboard.
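One quick way to exercise the proxy end to end is to send it an OpenAI-format chat request. A sketch using only the Python standard library, assuming the proxy is listening on its default port 4000 (adjust PROXY_URL if you started it with a different --port):

```python
import json
import urllib.request

PROXY_URL = "http://localhost:4000/chat/completions"  # default port assumed

def build_payload(prompt: str) -> dict:
    """OpenAI-format chat payload; "gpt-4" matches model_name in config.yaml."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
    }

def query_proxy(prompt: str) -> dict:
    """POST the payload to the LiteLLM proxy and return the parsed JSON response."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(query_proxy("this is a test request, write a short poem"))
```

Each request served through the proxy should then show up as a trace on the Langtrace dashboard.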
Want to see more supported methods? Check out the sample code in the Langtrace Langchain Python Example.