
Introduction

Langtrace provides native support for DeepSeek’s family of models. When using DeepSeek models, Langtrace automatically captures key metrics, including:
  • Token usage
  • Cost
  • Latency
  • Model hyperparameters

Prerequisites

Before integrating DeepSeek with Langtrace, ensure you have:
  • A Langtrace account with an API key
  • DeepSeek API credentials
  • A Python environment (3.6 or later)

Installation

  1. Install the Langtrace Python SDK:
pip install -U langtrace-python-sdk
  2. Set your Langtrace API key as an environment variable:
export LANGTRACE_API_KEY=YOUR_LANGTRACE_API_KEY

Integration

Initialize Langtrace before importing any LLM modules:
import os
from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
from openai import OpenAI

# Initialize Langtrace
langtrace.init(api_key=os.environ['LANGTRACE_API_KEY'])

# Initialize the DeepSeek client via its OpenAI-compatible endpoint
client = OpenAI(
    api_key="<DeepSeek API Key>",
    base_url="https://api.deepseek.com"
)

# Example: Create a chat completion
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    stream=False
)

print(response.choices[0].message.content)
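
The same client also supports streaming. A minimal sketch of a streamed completion, reusing the client created above (the chunk-iteration pattern is the standard one from the openai Python SDK, and Langtrace should capture the streamed call in the same way):

# Example: Stream a chat completion (reuses the client defined above)
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True
)

for chunk in stream:
    delta = chunk.choices[0].delta  # Incremental piece of the reply
    if delta.content:
        print(delta.content, end="", flush=True)
print()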

Viewing Traces

After integration, you can view your DeepSeek model traces in the Langtrace dashboard. The traces will include:
  • Request/response payloads
  • Token usage metrics
  • Cost information
  • Latency measurements
  • Model configuration details
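
To make a specific interaction easy to find in the dashboard, you can group its calls under a named root span. A minimal sketch using the with_langtrace_root_span decorator exported by the Langtrace Python SDK (the span name below is an arbitrary example, and passing a name is assumed to be supported, as shown in the SDK's README):

from langtrace_python_sdk import with_langtrace_root_span

# Group the call (and anything nested inside it) under one root span,
# so the whole interaction appears as a single trace in the dashboard
@with_langtrace_root_span("deepseek_chat_example")
def ask_deepseek(prompt):
    return client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": prompt}],
    )

print(ask_deepseek("Hello").choices[0].message.content)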

Additional Resources