Context Window
The amount of text an AI model can remember and process at once, measured in tokens. Bigger windows mean longer conversations and documents.
What is Context Window?
A context window is the working memory of an AI model: how much text it can keep track of while generating a response.
It's measured in tokens (roughly 3/4 of a word each). Claude 3.5 Sonnet has a 200K token window (about 150,000 words). GPT-4 ranges from 8K to 128K depending on the version.
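The rough 3/4-word-per-token ratio can be turned into a quick back-of-envelope estimator. This is a minimal sketch of that heuristic only; real tokenizers (such as OpenAI's tiktoken library) give exact counts that vary by model and text.

```python
# Rough token estimate using the ~3/4 word-per-token heuristic
# mentioned above. An approximation, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Estimate token count: words / 0.75 (about 4 tokens per 3 words)."""
    words = len(text.split())
    return round(words / 0.75)

# By this heuristic, a 150,000-word document is roughly 200K tokens,
# which is why 200K tokens maps to about 150,000 words.
print(estimate_tokens("the quick brown fox jumps"))  # 5 words -> 7
```

Useful for budgeting before an API call; switch to the model's actual tokenizer when the count needs to be exact.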
When you hit the limit, the model starts forgetting earlier parts of your conversation or cuts off mid-response. Builders use larger windows for analyzing full codebases, processing long documents, or maintaining context across complex multi-step tasks.
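One common way builders handle hitting the limit is to trim the oldest messages so the conversation still fits. A minimal sketch, where `count_tokens` is a stand-in for a real tokenizer using the word-count heuristic above:

```python
# Sketch of context-window trimming: drop the oldest messages until
# the conversation fits a token budget.

def count_tokens(message: str) -> int:
    # Heuristic stand-in for a real tokenizer (~3/4 word per token).
    return round(len(message.split()) / 0.75)

def trim_to_budget(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose total tokens fit the budget."""
    kept, total = [], 0
    for msg in reversed(messages):      # walk newest-first
        cost = count_tokens(msg)
        if total + cost > budget:
            break                       # everything older is dropped
        kept.append(msg)
        total += cost
    return list(reversed(kept))         # restore chronological order

history = ["old question", "old answer", "new question about the codebase"]
print(trim_to_budget(history, budget=10))
```

Dropping whole messages from the front is the simplest policy; production systems often summarize the dropped turns instead so no context is lost entirely.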
Longer contexts also cost more. API pricing is roughly linear in token count, so a 100K token input costs about 100x more than a 1K token input, while the self-attention computation inside the model grows quadratically (not exponentially) with context length.
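The two scaling regimes are easy to compute directly. A quick illustration, assuming flat per-token pricing and standard quadratic self-attention:

```python
# Compare how cost scales when context grows from 1K to 100K tokens.

short, long = 1_000, 100_000  # token counts

linear_ratio = long / short            # per-token pricing: 100x
attention_ratio = long**2 / short**2   # self-attention FLOPs: 10,000x

print(linear_ratio, attention_ratio)   # 100.0 10000.0
```

This is why long-context inference is disproportionately expensive for providers even though the bill you see grows only linearly.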
Related Terms
AI Agents: Autonomous software that observes, decides, and acts to complete tasks without constant human input, using LLMs as its decision-making brain.
Artificial Intelligence: Computer systems that learn from data and perform tasks that typically require human intelligence, like recognizing patterns and making decisions.
Claude Sonnet 4.5: Anthropic's mid-tier AI model released September 2025, optimized for coding and agentic tasks with a 200K token context window.
Tokens: The bite-sized chunks of text that AI models read and generate, like words or word fragments. They're how AI counts and processes language.
Vector Database: A specialized database that stores data as mathematical vectors (embeddings) to enable fast semantic search and AI-powered similarity matching.