python · beginner
LangChain Prompt Chain (Python)
Build a simple prompt chain with a prompt template, ChatOpenAI, and an output parser using LangChain's LCEL pipe syntax.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a senior {role}. Be concise."),
    ("human", "{question}"),
])

chain = prompt | llm | StrOutputParser()

result = chain.invoke({
    "role": "Python developer",
    "question": "What is the best way to handle async tasks in Python?",
})
print(result)

Use Cases
- prompt chaining
- LLM workflow
- text summarization
Related Snippets
Similar patterns you can reuse in the same workflow.
typescript · intermediate
OpenAI Chat Completion with Streaming
Stream GPT responses token-by-token using the OpenAI SDK with async iteration.
#openai #streaming
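That related snippet is TypeScript; a minimal Python sketch of the same token-by-token pattern, assuming the `openai` package and an `OPENAI_API_KEY` in the environment (the model name is an assumption):

```python
def join_deltas(deltas):
    # Streamed chunks can carry None content (e.g. the final chunk); skip those.
    return "".join(d for d in deltas if d)

def stream_reply(prompt, model="gpt-4o"):
    # Requires the openai package and OPENAI_API_KEY; import kept local
    # so join_deltas stays dependency-free.
    from openai import OpenAI

    client = OpenAI()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    deltas = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
        deltas.append(delta)
    return join_deltas(deltas)
```

Printing each delta as it arrives gives the token-by-token effect; the joined string is the full reply.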
typescript · advanced
RAG Pipeline (Retrieve + Augment + Generate)
Minimal RAG implementation: embed a query, retrieve top-k chunks, inject into prompt.
#rag #embeddings
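The retrieve step of that pipeline reduces to similarity ranking over stored embeddings plus prompt injection; a dependency-free Python sketch (the embedding vectors here are hypothetical placeholders, not real model output):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, chunks, k=2):
    # chunks: list of (text, embedding) pairs; return the k closest texts.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, contexts):
    # Augment: inject the retrieved chunks ahead of the question.
    context_block = "\n\n".join(contexts)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {question}"
```

The generate step then sends `build_prompt(...)` to any chat model, e.g. the `chain` from the snippet above.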
typescript · advanced
LangChain RAG Chain Pipeline
Build a retrieval-augmented generation chain with LangChain using vector store retrieval and prompt templates.
#langchain #rag
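In Python the same idea wires a retriever into the prompt with LCEL; a hedged sketch assuming `langchain-openai`, `langchain-community`, and FAISS are installed (the sample text, store, and `k` value are illustrative):

```python
def format_docs(docs):
    # Join retrieved Document page contents into one context string.
    return "\n\n".join(d.page_content for d in docs)

def build_rag_chain():
    # Imports kept inside so the pure helper above has no dependencies.
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    from langchain_community.vectorstores import FAISS
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnablePassthrough

    store = FAISS.from_texts(
        ["LangChain composes runnables with the | operator."],
        OpenAIEmbeddings(),
    )
    retriever = store.as_retriever(search_kwargs={"k": 2})
    prompt = ChatPromptTemplate.from_messages([
        ("system", "Answer from this context:\n{context}"),
        ("human", "{question}"),
    ])
    return (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | ChatOpenAI(model="gpt-4o", temperature=0)
        | StrOutputParser()
    )
```

`build_rag_chain().invoke("How does LangChain compose steps?")` then retrieves, augments, and generates in one call.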
typescript · beginner
Generate Text Embeddings with OpenAI
Create vector embeddings for semantic search and similarity matching using text-embedding-3-small.
#openai #embeddings
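A Python sketch of the same call, assuming the `openai` package and an `OPENAI_API_KEY` in the environment; the similarity helper relies on OpenAI embeddings being unit-length:

```python
def embed(texts, model="text-embedding-3-small"):
    # Requires the openai package and OPENAI_API_KEY; import kept local
    # so the similarity helper below stays dependency-free.
    from openai import OpenAI

    client = OpenAI()
    resp = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in resp.data]

def similarity(a, b):
    # OpenAI embeddings are L2-normalized, so a plain dot product
    # equals cosine similarity.
    return sum(x * y for x, y in zip(a, b))
```

Passing a list to `input` batches several texts into one request; `similarity` then ranks them against a query embedding for semantic search.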