Langtrace integrates directly with Langchain, offering detailed, real-time insights into performance metrics such as cost, token usage, accuracy, and latency.
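Before running the example below, Langtrace needs to be initialized so it can instrument the LangChain and OpenAI calls. A minimal sketch, assuming the `langtrace-python-sdk` package and a placeholder API key from your Langtrace project settings:

```python
# Initialize Langtrace before importing any LLM modules so they get instrumented.
# Replace the placeholder with the API key from your Langtrace project settings.
from langtrace_python_sdk import langtrace

langtrace.init(api_key="<YOUR_LANGTRACE_API_KEY>")
```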
```python
# We will be loading a document from the assets folder to create embeddings and query it
from langchain import hub
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = TextLoader("assets/soccer_rules.txt")
docs = loader.load()

# Split the document into overlapping chunks and index them in Chroma
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())

# Build a retriever over the index and pull a standard RAG prompt from the LangChain hub
retriever = vectorstore.as_retriever()
prompt = hub.pull("rlm/rag-prompt")
```
Now retrieve and generate using the relevant snippets of the document:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough


def format_docs(docs):
    # Concatenate the retrieved chunks into a single context string
    return "\n\n".join(doc.page_content for doc in docs)


# Pipe the retrieved context and the raw question through the prompt,
# the chat model (`llm`, initialized earlier), and a string output parser
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("What is Offside?")
```
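To make the whole chain invocation show up as one grouped trace on the dashboard, you can optionally wrap it in a root span. A minimal sketch, assuming the `with_langtrace_root_span` decorator from the same SDK:

```python
from langtrace_python_sdk import with_langtrace_root_span


@with_langtrace_root_span()
def ask(question: str) -> str:
    # Retriever, prompt, and LLM spans triggered inside this call are
    # nested under a single root span on the Langtrace dashboard.
    return rag_chain.invoke(question)


ask("What is Offside?")
```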
You can now view your traces on the Langtrace dashboard.