LangChain RAG Chain Pipeline
Build a retrieval-augmented generation chain with LangChain using vector store retrieval and prompt templates.
import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence } from '@langchain/core/runnables';
const model = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 });
const ragPrompt = PromptTemplate.fromTemplate(`
Answer the question based only on the provided context.
If the context doesn't contain the answer, say "I don't know."
Context: {context}
Question: {question}
Answer:`);
// Simulated retriever
async function retrieve(query: string): Promise<string> {
// Replace with actual vector store retrieval
return 'Retrieved context documents here...';
}
const chain = RunnableSequence.from([
{
context: (input: { question: string }) => retrieve(input.question),
question: (input: { question: string }) => input.question,
},
ragPrompt,
model,
new StringOutputParser(),
]);
// const answer = await chain.invoke({ question: 'What is RAG?' });
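The `retrieve` stub above stands in for a vector store lookup. As a dependency-free sketch of what that lookup does under the hood, here is a top-k cosine-similarity search over a tiny in-memory corpus. The chunks and the 3-dimensional embeddings are invented purely for illustration; a real pipeline would embed with a model (e.g. OpenAI's text-embedding-3-small) and delegate this to a vector store.

```typescript
// Hypothetical in-memory corpus: each entry pairs a text chunk with a
// pre-computed embedding (tiny 3-d vectors, for illustration only).
interface Chunk {
  text: string;
  embedding: number[];
}

const corpus: Chunk[] = [
  { text: 'RAG combines retrieval with generation.', embedding: [0.9, 0.1, 0.0] },
  { text: 'LangChain composes LLM pipelines.', embedding: [0.2, 0.8, 0.1] },
  { text: 'Vector stores index embeddings.', embedding: [0.7, 0.3, 0.2] },
];

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every chunk against the query embedding, take the k best,
// and join them into a single context string for the prompt.
function retrieveTopK(queryEmbedding: number[], k: number): string {
  return corpus
    .map((c) => ({ c, score: cosine(queryEmbedding, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.c.text)
    .join('\n');
}
```

Swapping this logic for a LangChain retriever (e.g. one produced by a vector store's `asRetriever()`) keeps the rest of the chain unchanged, since the chain only consumes the returned context string.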
Use Cases
- Document Q&A
- Knowledge base chatbots
- Semantic search answers
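For the document Q&A and knowledge-base use cases above, source documents are normally split into overlapping chunks before embedding so that retrieved context fits the prompt. A minimal sketch of fixed-size chunking with overlap; the sizes are illustrative, and in practice a LangChain splitter such as RecursiveCharacterTextSplitter would handle this with sentence-aware boundaries:

```typescript
// Split text into fixed-size chunks, each overlapping the previous one
// by `overlap` characters so no sentence is cut off without context.
function chunkText(text: string, size: number, overlap: number): string[] {
  if (overlap >= size) throw new Error('overlap must be smaller than chunk size');
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each resulting chunk would then be embedded and indexed, becoming a candidate for the `retrieve` step of the chain.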
Related Snippets
Similar patterns you can reuse in the same workflow.
RAG Pipeline (Retrieve + Augment + Generate)
Minimal RAG implementation: embed a query, retrieve top-k chunks, inject into prompt.
LangChain Prompt Chain (Python)
Build a simple LLMChain with a prompt template and ChatOpenAI in LangChain.
OpenAI Chat Completion with Streaming
Stream GPT responses token-by-token using the OpenAI SDK with async iteration.
Claude Messages API (Anthropic SDK)
Send messages to Claude using the official Anthropic SDK with system prompt and user turn.