KubeAI
Installing Langtrace alongside KubeAI.
KubeAI is used for deploying LLMs with an OpenAI-compatible endpoint. Admins can configure ML models via `kind: Model` Kubernetes Custom Resources. KubeAI can be thought of as a Model Operator that manages vLLM and Ollama servers.
In this tutorial you will learn how to deploy KubeAI and Langtrace end-to-end. Both KubeAI and Langtrace are installed in your Kubernetes cluster. No cloud services or external dependencies are required.
- Install Langtrace:
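  A minimal sketch using Helm, assuming the Langtrace chart is published in the Scale3-Labs Helm repository (verify the repo URL and chart name against the Langtrace self-hosting docs):

  ```bash
  # Add the Langtrace Helm repo (assumed URL) and install the chart into your cluster
  helm repo add langtrace https://Scale3-Labs.github.io/langtrace-helm-chart
  helm repo update
  helm install langtrace langtrace/langtrace
  ```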
- Install KubeAI:
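  A sketch of the standard KubeAI Helm install, assuming the chart repo is hosted at https://www.kubeai.org (check the KubeAI docs for current values):

  ```bash
  # Add the KubeAI Helm repo and install the operator
  helm repo add kubeai https://www.kubeai.org
  helm repo update
  helm install kubeai kubeai/kubeai --wait --timeout 10m
  ```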
- Create a local Python environment and install dependencies:
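  For example, with venv and pip (the Langtrace SDK package on PyPI is `langtrace-python-sdk`):

  ```bash
  python -m venv .venv
  source .venv/bin/activate
  pip install openai langtrace-python-sdk
  ```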
- Expose the KubeAI service to your local port:
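  Assuming the KubeAI service is named `kubeai` and listens on port 80 (confirm with `kubectl get svc`):

  ```bash
  kubectl port-forward svc/kubeai 8000:80
  ```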
- Expose the Langtrace service to your local port:
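  Assuming the Langtrace UI service is named `langtrace-app` and listens on port 3000; the name may differ depending on your Helm release (confirm with `kubectl get svc`):

  ```bash
  kubectl port-forward svc/langtrace-app 3000:3000
  ```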
- A Langtrace API key is required to use the Langtrace SDK, so let's get one from your self-hosted Langtrace UI. Open your browser to http://localhost:3000, create a project, and copy the API key for that project. In the Python script below, replace `langtrace_api_key` with your API key.
- Create a file named `langtrace-example.py` with the following content:
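  A minimal sketch of the script. It assumes KubeAI's OpenAI-compatible endpoint is reachable at http://localhost:8000/openai/v1 (via the port-forward above), that you have deployed a model named `gemma2-2b-cpu` (use the name of whichever `kind: Model` resource you created), and that the self-hosted Langtrace trace endpoint lives under `/api/trace`:

  ```python
  # Langtrace must be initialized before any LLM client library is imported,
  # so that the OpenAI module gets instrumented.
  from langtrace_python_sdk import langtrace

  langtrace.init(
      api_key="langtrace_api_key",  # replace with your Langtrace project API key
      api_host="http://localhost:3000/api/trace",  # assumed self-hosted trace endpoint
  )

  from openai import OpenAI

  # KubeAI exposes an OpenAI-compatible API; the API key is not checked by KubeAI.
  client = OpenAI(
      base_url="http://localhost:8000/openai/v1",
      api_key="ignored",
  )

  response = client.chat.completions.create(
      model="gemma2-2b-cpu",  # assumption: the name of a Model resource you deployed
      messages=[{"role": "user", "content": "Say hello from KubeAI!"}],
  )
  print(response.choices[0].message.content)
  ```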
- Run the Python script:
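  With the virtual environment still active:

  ```bash
  python langtrace-example.py
  ```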
- Now you should see the trace in your Langtrace UI. Take a look by visiting http://localhost:3000.