Python · Beginner
Token Counter with Tiktoken
Count tokens and estimate costs for OpenAI API calls using the tiktoken tokenizer library.
import tiktoken


def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens for a given text and model."""
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))


def estimate_cost(
    prompt_tokens: int,
    completion_tokens: int,
    model: str = "gpt-4o",
) -> float:
    """Estimate API cost in USD."""
    # Rates are USD per 1M tokens; verify against current OpenAI pricing.
    pricing = {
        "gpt-4o": {"input": 2.50, "output": 10.00},
        "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    }
    rates = pricing.get(model, pricing["gpt-4o"])
    input_cost = (prompt_tokens / 1_000_000) * rates["input"]
    output_cost = (completion_tokens / 1_000_000) * rates["output"]
    return round(input_cost + output_cost, 6)


# Usage:
# tokens = count_tokens("Hello, how are you?")
# cost = estimate_cost(prompt_tokens=500, completion_tokens=200)

Use Cases
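As a sanity check on the rate math, the usage example above works out as follows (rates in USD per 1M tokens, taken from the pricing table in the snippet):

```python
# gpt-4o rates per 1M tokens, from the pricing table in the snippet.
input_rate, output_rate = 2.50, 10.00
prompt_tokens, completion_tokens = 500, 200

input_cost = (prompt_tokens / 1_000_000) * input_rate        # 0.00125
output_cost = (completion_tokens / 1_000_000) * output_rate  # 0.002
total = round(input_cost + output_cost, 6)
print(total)  # → 0.00325
```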
- Cost estimation
- Prompt length validation
- Context window management
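For the context-window use case, a minimal sketch of a pre-flight check (the helper name `fits_context` and the 128,000-token default window are illustrative assumptions, not part of the snippet; check the window size for your model):

```python
def fits_context(prompt_tokens: int, max_completion_tokens: int,
                 context_window: int = 128_000) -> bool:
    """Check that the prompt plus the reserved completion budget
    fits inside the model's context window (assumed 128k here)."""
    return prompt_tokens + max_completion_tokens <= context_window


# Pair with count_tokens() from the snippet to validate before calling the API.
print(fits_context(120_000, 4_096))  # → True
print(fits_context(127_000, 4_096))  # → False
```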
Related Snippets
Similar patterns you can reuse in the same workflow.
TypeScript · Intermediate
OpenAI Chat Completion with Streaming
Stream GPT responses token-by-token using the OpenAI SDK with async iteration.
#openai #streaming
TypeScript · Beginner
Generate Text Embeddings with OpenAI
Create vector embeddings for semantic search and similarity matching using text-embedding-3-small.
#openai #embeddings
TypeScript · Advanced
RAG Pipeline (Retrieve + Augment + Generate)
Minimal RAG implementation: embed a query, retrieve top-k chunks, inject into prompt.
#rag #embeddings
TypeScript · Intermediate
OpenAI Tool Calling (Function Calling)
Define tools for GPT to call, parse the response, execute the function, and return results.
#openai #tool-calling