Context Window
The amount of text an AI model can remember and process at once, measured in tokens. Bigger windows mean longer conversations and documents.
What is Context Window?
A context window is the working memory of an AI model: the amount of text it can keep track of while generating a response.
It's measured in tokens (roughly 3/4 of a word each). Claude 3.5 Sonnet has a 200K token window (about 150,000 words). GPT-4 ranges from 8K to 128K depending on the version.
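The 3/4-word rule of thumb can be turned into a quick budgeting helper. This is only a heuristic sketch; real tokenizers (such as the ones model providers ship) give exact counts, and `estimate_tokens` is an illustrative name, not any library's API.

```python
# Rough token estimate using the ~3/4-word-per-token rule of thumb.
# Only for quick budgeting; actual tokenizers give exact counts.

def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words / 0.75)  # ~4/3 tokens per word on average

print(estimate_tokens("The quick brown fox jumps over the lazy dog"))  # 9 words -> 12
```

By this estimate, a 200K-token window holds roughly 150,000 words, matching the figures above.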
When you hit the limit, earlier parts of your conversation get truncated out of the window (so the model effectively forgets them), or the request is rejected outright. Builders use larger windows for analyzing full codebases, processing long documents, or maintaining context across complex multi-step tasks.
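A common way to handle the limit is to drop the oldest turns until the conversation fits the budget. A minimal sketch, assuming token counts are precomputed per message; the function and data shapes are illustrative, not any specific chat API:

```python
# Minimal context-window trimming: drop the oldest messages until
# the conversation fits a token budget.

def trim_to_window(messages, max_tokens):
    """messages: list of (text, token_count) tuples, oldest first."""
    total = sum(count for _, count in messages)
    trimmed = list(messages)
    while trimmed and total > max_tokens:
        _, dropped = trimmed.pop(0)  # forget the oldest turn first
        total -= dropped
    return trimmed

history = [("system setup", 50), ("turn 1", 400), ("turn 2", 300), ("turn 3", 200)]
print(trim_to_window(history, 600))  # keeps only the most recent turns
```

Production systems usually pin the system prompt so it is never trimmed, and many summarize dropped turns instead of discarding them entirely.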
Longer contexts cost more. API pricing typically scales linearly with input tokens, so a 100K token input costs roughly 100x more than a 1K token input. Under the hood the attention computation grows quadratically with sequence length, which is why very long windows are expensive for providers to serve.
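The two scaling laws above can be checked with back-of-envelope arithmetic. The price per 1K tokens below is a hypothetical placeholder, not any provider's actual rate:

```python
# Back-of-envelope comparison: linear per-token pricing vs.
# attention work growing with the square of sequence length.

PRICE_PER_1K = 0.003  # hypothetical $/1K input tokens

def input_cost(tokens):
    return tokens / 1000 * PRICE_PER_1K

def attention_work_ratio(long_ctx, short_ctx):
    return (long_ctx ** 2) / (short_ctx ** 2)

print(input_cost(100_000) / input_cost(1_000))  # 100.0 -> you pay ~100x
print(attention_work_ratio(100_000, 1_000))     # 10000.0 -> attention does ~10,000x the work
```

The gap between the 100x bill and the 10,000x compute is part of why providers cap or price long-context usage differently.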
Related Terms
System Prompt: Instructions that define an AI's behavior, personality, and constraints before it responds to user queries.
Prompt Engineering: The practice of crafting specific instructions to get better outputs from AI models like ChatGPT, Claude, or Gemini.
Skills: Reusable instruction packages that teach Claude how to complete specific tasks consistently, loaded dynamically only when needed.
Chain-of-Thought Prompting: A prompting technique that makes AI models show their reasoning step-by-step, leading to more accurate answers on complex problems.
AI Agents: Autonomous software that observes, decides, and acts to complete tasks without constant human input, using LLMs as their decision-making brain.