Langtrace integrates directly with LiteLLM, offering detailed, real-time insights into performance metrics such as cost, token usage, accuracy, and latency.
You’ll need an API key from Langtrace. Sign up for Langtrace if you haven’t done so already.
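Before running the example below, install the packages and make your API key available to the SDK. This is a minimal setup sketch; the `LANGTRACE_API_KEY` environment variable name and the `langtrace-python-sdk` package name are assumptions based on Langtrace's Python SDK, so check the Langtrace docs if your setup differs.

```shell
# Install LiteLLM and the Langtrace Python SDK (assumed package name)
pip install litellm langtrace-python-sdk

# Expose your Langtrace API key to the SDK (assumed variable name)
export LANGTRACE_API_KEY="<your-langtrace-api-key>"
```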
```python
from litellm import completion

response = completion(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
)
print(response)
```
You can now view your traces on the Langtrace dashboard.
Want to see more supported methods? Check out the sample code in the Langtrace Langchain Python Example.