Using Langtrace to monitor your OpenAI LLM apps is quick and easy. Follow these steps:

Setup

  1. Install the Langtrace and OpenAI SDKs.

Note: You’ll need API keys from Langtrace and OpenAI. Sign up for Langtrace and/or OpenAI if you haven’t done so already.

Python
# Install the SDK
pip install -U langtrace-python-sdk openai
  2. Set up environment variables:
Shell
export LANGTRACE_API_KEY=YOUR_LANGTRACE_API_KEY
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY
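Before running your app, you can verify that both variables are actually set. The sketch below is illustrative only: the `missing_keys` helper and the stand-in environment are hypothetical, not part of either SDK.

```python
import os

# Hypothetical helper: report which required variables are unset or empty.
def missing_keys(required, env=None):
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

# Stand-in environment for illustration (real code would pass nothing
# and let the helper read os.environ).
fake_env = {"LANGTRACE_API_KEY": "lt-example", "OPENAI_API_KEY": ""}
print(missing_keys(["LANGTRACE_API_KEY", "OPENAI_API_KEY"], fake_env))
# prints ['OPENAI_API_KEY']
```

A check like this fails fast with a clear message instead of surfacing a cryptic authentication error deep inside an SDK call.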

Usage

Generate a simple output with an OpenAI model:

import os
from langtrace_python_sdk import langtrace  # Must precede any LLM module imports
from openai import OpenAI

langtrace.init(api_key=os.environ["LANGTRACE_API_KEY"])
client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

# Generate a simple output with OpenAI's GPT-3.5 model
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LangChain?",
        }
    ],
    model="gpt-3.5-turbo",
)
print(chat_completion.choices[0].message.content)

# Let's also create some embeddings
response = client.embeddings.create(
    input="Your text string goes here",
    model="text-embedding-3-small"
)

print(response.data[0].embedding)
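Embedding vectors like the one printed above are typically compared with cosine similarity (for semantic search or clustering). Here is a minimal sketch using only the standard library; the toy vectors are placeholders, not real API output:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding responses
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
print(round(cosine_similarity(v1, v2), 3))  # identical vectors have similarity 1.0
```

In practice you would pass `response.data[0].embedding` from two `client.embeddings.create` calls; Langtrace records each embeddings request as its own trace.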

You can now view your traces on the Langtrace dashboard.

Want to see more supported methods? Check out the sample code in the Langtrace OpenAI Python Example repository.