How to group traces

A typical application has multiple operations that are related to each other. For example, in a RAG workflow, the user's input is embedded using a model, a semantic search is run against a vector database, and the retrieved results are passed back to the model to generate a natural language response. In such cases, it is useful to group these operations under a single root span, so you can see the entire flow of operations in a single trace.

Installation

Step 1: Install and initialize the Langtrace SDK. Refer to the installation guide for more information.
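
If the SDK is not yet part of your project, it can be added from npm; the package name matches the import used in the snippet below:

npm install @langtrase/typescript-sdk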

// Must precede any LLM module imports
import * as Langtrace from "@langtrase/typescript-sdk";

// Initialize once at startup with your Langtrace API key
Langtrace.init({ api_key: LANGTRACE_API_KEY });

Usage

Step 2: Use the with_langtrace_root_span decorator (Python) or the withLangTraceRootSpan function (TypeScript). Example below:

import * as Langtrace from "@langtrase/typescript-sdk";
import { ChromaClient, OpenAIEmbeddingFunction } from "chromadb";
import OpenAI from "openai";

// Embedding function used by the Chroma collection (one possible choice; assumes OPENAI_API_KEY is set)
const embedder = new OpenAIEmbeddingFunction({ openai_api_key: process.env.OPENAI_API_KEY ?? "" });

export async function chatResponse (): Promise<void> {
  await Langtrace.withLangTraceRootSpan(async () => {
    // Semantic search over the vector database
    const client = new ChromaClient();
    const collection = await client.getCollection({
      name: "documentation",
      embeddingFunction: embedder
    });

    const results = await collection.query({
      nResults: 2,
      queryTexts: ["How to setup Langtrace?"]
    });

    // Pass the retrieved documents back to the model for a natural language answer
    const openaiClient = new OpenAI();
    return openaiClient.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "system", content: `Respond in natural language using this context: ${JSON.stringify(results.documents)}` }],
      stream: false
    });
  });
}
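
Every operation performed inside the callback, the Chroma query and the OpenAI chat completion, is recorded under the same root span, so the whole RAG flow shows up as one trace. A minimal way to exercise the function defined above:

// Call the traced function; the nested VectorDB and LLM spans are grouped under one trace
chatResponse()
  .then(() => console.log("RAG request traced"))
  .catch(console.error);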