#llm
8 snippets tagged with #llm
OpenAI Chat Completion with Streaming
Stream GPT responses token-by-token using the OpenAI SDK with async iteration.
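A minimal sketch of what this snippet covers, assuming the `openai` package is installed and `OPENAI_API_KEY` is set; the model name `gpt-4o-mini` is an assumption, not something the snippet specifies:

```python
import asyncio  # run with: asyncio.run(stream_chat("hello"))


def delta_text(chunk) -> str:
    """Pull the content delta out of one streamed chunk ('' if none)."""
    delta = chunk.choices[0].delta.content
    return delta or ""


async def stream_chat(prompt: str, model: str = "gpt-4o-mini") -> str:
    from openai import AsyncOpenAI

    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    stream = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    text = []
    async for chunk in stream:  # async iteration: one chunk per token batch
        piece = delta_text(chunk)
        print(piece, end="", flush=True)  # render tokens as they arrive
        text.append(piece)
    return "".join(text)
```

The final and role-only chunks carry a `None` content delta, which is why `delta_text` normalizes to an empty string.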
RAG Pipeline (Retrieve + Augment + Generate)
Minimal RAG implementation: embed a query, retrieve top-k chunks, inject into prompt.
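The three stages can be sketched without any external services; here a toy bag-of-words similarity stands in for a real embedding model, purely to show the retrieve-then-inject flow:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]


def build_prompt(query: str, chunks: list[str], k: int = 2) -> str:
    """Inject retrieved chunks into the prompt as grounding context."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a real pipeline, `embed` would call an embedding API and the chunks would live in a vector store, but the prompt-assembly step is the same.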
Claude Messages API (Anthropic SDK)
Send messages to Claude using the official Anthropic SDK with system prompt and user turn.
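A sketch of the shape of such a call, assuming the `anthropic` package is installed and `ANTHROPIC_API_KEY` is set; the model name is an illustrative assumption. Note that in the Messages API the system prompt is a top-level field, not a message role:

```python
def build_request(system: str, user: str,
                  model: str = "claude-sonnet-4-20250514",
                  max_tokens: int = 1024) -> dict:
    """Assemble a Messages API payload: system prompt goes in the
    top-level `system` field, the user turn in `messages`."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "system": system,
        "messages": [{"role": "user", "content": user}],
    }


def ask_claude(system: str, user: str) -> str:
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(**build_request(system, user))
    return resp.content[0].text  # first content block of the reply
```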
LangChain Prompt Chain (Python)
Build a simple LLMChain with a prompt template and ChatOpenAI in LangChain.
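A sketch of the same idea, assuming `langchain-core` and `langchain-openai` are installed and `OPENAI_API_KEY` is set; note that current LangChain favors the LCEL pipe syntax over the legacy `LLMChain` class, and the template text here is illustrative:

```python
def format_prompt(template: str, **variables) -> str:
    """Minimal stand-in for PromptTemplate.format (brace placeholders)."""
    return template.format(**variables)


def build_chain():
    # LCEL pipe syntax: prompt template -> chat model -> string output.
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
    return prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
```

Usage would be `build_chain().invoke({"topic": "RAG"})`.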
LangChain RAG Chain Pipeline
Build a retrieval-augmented generation chain with LangChain using vector store retrieval and prompt templates.
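A sketch of the wiring, under the same assumptions as above (`langchain-core`, `langchain-openai` installed, `OPENAI_API_KEY` set); the retriever is passed in, so any vector store's `.as_retriever()` would slot in:

```python
def format_docs(docs) -> str:
    """Join retrieved documents' page_content into one context string."""
    return "\n\n".join(getattr(d, "page_content", str(d)) for d in docs)


def build_rag_chain(retriever):
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_template(
        "Answer from the context only.\n\n"
        "Context:\n{context}\n\nQuestion: {question}"
    )
    return (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | ChatOpenAI(model="gpt-4o-mini")
        | StrOutputParser()
    )
```

The dict literal is coerced into a parallel runnable, so retrieval and question pass-through feed the prompt's two placeholders.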
Few-Shot Prompt Template
Build structured few-shot prompts with examples, system instructions, and output format constraints.
AI Agent Loop with Tool Calling
Implement an autonomous agent loop that plans, selects tools, executes actions, and observes results.
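The loop itself is model-agnostic and can be sketched with the model as a callable; the decision format (`{"tool": ..., "args": ...}` vs. `{"answer": ...}`) is an assumption standing in for real tool-call parsing:

```python
def run_agent(model, tools: dict, task: str, max_steps: int = 5) -> str:
    """Plan/act/observe loop. `model(history)` returns either a tool call
    {"tool": name, "args": {...}} or a final {"answer": text}."""
    history = [("task", task)]
    for _ in range(max_steps):
        decision = model(history)          # plan: model picks next action
        if "answer" in decision:
            return decision["answer"]      # done: model chose to answer
        tool = tools[decision["tool"]]     # select tool by name
        result = tool(**decision["args"])  # execute the action
        history.append(("observation", result))  # observe, then loop
    return "max steps reached"
```

The `max_steps` cap keeps a confused model from looping forever, which is the standard safety valve in agent loops.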
AI Guardrails & Safety Pattern
Implement input/output guardrails for LLM applications with content filtering and response validation.
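A minimal sketch of the pattern; the specific blocked phrases and output checks are illustrative placeholders, and `llm` is any callable from prompt to response:

```python
BLOCKED_PHRASES = ("ignore previous instructions", "reveal your system prompt")


def check_input(text: str) -> bool:
    """Input guardrail: reject obvious prompt-injection phrases."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)


def check_output(text: str, max_len: int = 2000) -> bool:
    """Output guardrail: validate length and screen for leaked secrets."""
    return len(text) <= max_len and "BEGIN PRIVATE KEY" not in text


def guarded_call(llm, user_text: str) -> str:
    """Wrap an LLM call with input filtering and output validation."""
    if not check_input(user_text):
        return "Request blocked by input guardrail."
    response = llm(user_text)
    if not check_output(response):
        return "Response withheld by output guardrail."
    return response
```

Production guardrails typically layer classifier models on top of such keyword checks, but the wrap-before-and-after shape stays the same.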