Sending OpenAI Chat Completion Traces
This guide explains how to send OpenTelemetry-compatible traces for OpenAI chat completions to Langtrace using cURL.
After following this guide, your traces will appear in the Langtrace UI alongside traces captured by the SDKs.
Prerequisites
- Langtrace API Key
- OpenAI API Key (only needed for the SDK comparison below)
Endpoint
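Traces are POSTed to the Langtrace trace-ingest endpoint. The URL below assumes the Langtrace cloud deployment; for a self-hosted instance, substitute your own host:

```
POST https://langtrace.ai/api/trace
```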
Headers
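A minimal header set, assuming the cloud endpoint above. The `x-api-key` header carries your Langtrace API key, and the user agent identifies the request as an OTLP export (the exact user-agent string is an assumption; verify against your Langtrace deployment):

```
Content-Type: application/json
x-api-key: <YOUR_LANGTRACE_API_KEY>
User-Agent: OTel-OTLP-Exporter-JavaScript/0.46.0
```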
Trace Format
The trace must follow the OpenTelemetry format and include specific attributes for OpenAI chat completions. Note that attribute values must use the OpenTelemetry value format with `stringValue` or `intValue`.
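For example, a string-valued and an integer-valued attribute are encoded like this (note: the strict OTLP/JSON encoding carries 64-bit integers as strings, so `"117"` may also be accepted):

```json
{
  "attributes": [
    {
      "key": "llm.request.model",
      "value": { "stringValue": "gpt-3.5-turbo" }
    },
    {
      "key": "llm.usage.total_tokens",
      "value": { "intValue": 117 }
    }
  ]
}
```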
Example cURL Command
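A sketch of the full request, assuming the cloud endpoint and headers above. The payload follows the OTLP/JSON trace structure (`resourceSpans` → `scopeSpans` → `spans`); the trace ID, span ID, and timestamps are illustrative and should be generated fresh for each request:

```shell
curl -X POST "https://langtrace.ai/api/trace" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $LANGTRACE_API_KEY" \
  -H "User-Agent: OTel-OTLP-Exporter-JavaScript/0.46.0" \
  -d '{
    "resourceSpans": [{
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "my-llm-app" } }
        ]
      },
      "scopeSpans": [{
        "spans": [{
          "traceId": "0af7651916cd43dd8448eb211c80319c",
          "spanId": "b7ad6b7169203331",
          "name": "openai.chat.completion",
          "kind": 3,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001500000000",
          "attributes": [
            { "key": "llm.vendor", "value": { "stringValue": "openai" } },
            { "key": "llm.path", "value": { "stringValue": "/chat/completions" } },
            { "key": "llm.request.model", "value": { "stringValue": "gpt-3.5-turbo" } },
            { "key": "llm.request.messages", "value": { "stringValue": "[{\"role\": \"user\", \"content\": \"Hello\"}]" } },
            { "key": "llm.usage.total_tokens", "value": { "intValue": 25 } }
          ]
        }]
      }]
    }]
  }'
```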
Required Attributes
| Attribute | Description | Required |
|---|---|---|
| `llm.vendor` | Service provider (`"openai"`) | Yes |
| `llm.request.model` | Model name (e.g., `"gpt-3.5-turbo"`) | Yes |
| `llm.request.messages` | JSON string of the messages array | Yes |
| `llm.path` | API endpoint path | Yes |
| `llm.request.temperature` | Temperature setting | No |
| `llm.request.max_tokens` | Maximum tokens | No |
| `llm.usage.*` | Token usage statistics | No |
Function Calling Example
When using OpenAI function calling, include the tool definitions in the span attributes, using the same value formatting.
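A sketch of the additional attributes, assuming the attribute key follows the same `llm.request.*` convention (`llm.request.tools` is an assumption; the tool definitions are serialized as a JSON string, like the messages array):

```json
{
  "attributes": [
    { "key": "llm.request.model", "value": { "stringValue": "gpt-3.5-turbo" } },
    {
      "key": "llm.request.tools",
      "value": {
        "stringValue": "[{\"type\": \"function\", \"function\": {\"name\": \"get_weather\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\"}}}}}]"
      }
    }
  ]
}
```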
Response Format
Success Response
Error Response
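Exact response bodies vary by Langtrace version, so only the status codes are sketched here (illustrative; verify against your deployment):

```
HTTP/1.1 200 OK            trace accepted and queued for processing
HTTP/1.1 401 Unauthorized  missing or invalid x-api-key header
HTTP/1.1 400 Bad Request   malformed payload (e.g., wrong value format)
```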
Comparison with SDK Usage
For comparison, here’s how the same trace would be generated using the Python SDK:
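A minimal sketch using the Langtrace Python SDK, which instruments the OpenAI client automatically so the same `llm.*` attributes are captured without hand-building the payload (assumes `pip install langtrace-python-sdk openai`; verify the package and `init` signature against the current SDK docs):

```python
# Initialize Langtrace before creating the OpenAI client,
# so the client gets instrumented.
from langtrace_python_sdk import langtrace
from openai import OpenAI

langtrace.init(api_key="<YOUR_LANGTRACE_API_KEY>")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The SDK traces this call and exports the span for you
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```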
Best Practices
- Always include required attributes (vendor, model, messages, path)
- Generate unique trace and span IDs for each request
- Use accurate timestamps for start and end times
- Include usage statistics when available
- Format message arrays and tool calls as proper JSON strings
- Use the OpenTelemetry user agent to ensure proper trace processing
- Include resource attributes with service information
- Use proper OpenTelemetry value format (stringValue/intValue) for attributes
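The ID and timestamp practices above can be sketched with the Python standard library alone: OpenTelemetry trace IDs are 16 random bytes (32 hex characters), span IDs are 8 random bytes (16 hex characters), and the `*TimeUnixNano` fields are nanosecond epoch timestamps, which OTLP/JSON carries as strings:

```python
import secrets
import time

def new_trace_id() -> str:
    # 16 random bytes -> 32 lowercase hex characters
    return secrets.token_hex(16)

def new_span_id() -> str:
    # 8 random bytes -> 16 lowercase hex characters
    return secrets.token_hex(8)

def now_unix_nano() -> str:
    # 64-bit nanosecond timestamp, stringified for OTLP/JSON
    return str(time.time_ns())

trace_id = new_trace_id()
span_id = new_span_id()
start_time = now_unix_nano()
print(trace_id, span_id, start_time)
```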