Generate a simple output with your deployment’s model:
```python
import os

from langtrace_python_sdk import langtrace  # Must precede any LLM module imports

langtrace.init(api_key=os.environ['LANGTRACE_API_KEY'])

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",
)

# Generate a simple output with the grok-beta model
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LangChain?",
        }
    ],
    model="grok-beta",
)

print(chat_completion.choices[0].message.content)
```
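If you want token-by-token output instead of a single response, the same OpenAI-compatible client accepts `stream=True`. The sketch below is a minimal variant of the example above; it assumes the xAI endpoint supports streaming chat completions, and Langtrace traces the streamed call the same way:

```python
import os

from langtrace_python_sdk import langtrace  # Must precede any LLM module imports

langtrace.init(api_key=os.environ['LANGTRACE_API_KEY'])

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",
)

# Stream the response; each chunk carries an incremental content delta
stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "What is LangChain?"}],
    model="grok-beta",
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```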
You can now view your traces on the Langtrace dashboard.