Agents use an LLM to run tools in a loop until a goal is reached. This is quite different from simply sending input text to a model and getting output text back: the tools let the model act on the world, which gives it far more power and agency.

image.png
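The loop described above can be sketched in a few lines. This is a minimal illustration, not any real framework's API: `fake_llm`, the message format, and the `add` tool are all invented so the loop runs end to end.

```python
# Sketch of an agent loop: the model either requests a tool call or
# produces a final answer. Everything here is a toy stand-in.

def run_agent(llm, tools, goal, max_steps=10):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = llm(messages)  # the full history is resent every turn
        if reply["type"] == "tool_call":
            result = tools[reply["name"]](**reply["args"])
            messages.append({"role": "assistant", "content": f"call {reply['name']}"})
            messages.append({"role": "tool", "content": str(result)})
        else:
            return reply["content"], messages
    raise RuntimeError("step budget exhausted")

# Toy model: call the tool once, then answer using its result.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "add", "args": {"a": 2, "b": 3}}
    return {"type": "answer", "content": "2 + 3 = 5"}

answer, history = run_agent(fake_llm, {"add": lambda a, b: a + b}, "What is 2 + 3?")
```

The key property is that the loop keeps going until the model stops asking for tools, which is what separates an agent from a single prompt-response exchange.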

As the user works with the agent, its context window gradually fills with tool calls and the feedback they produce.

The window fills because the entire conversation history, along with the original system prompt, is sent to the LLM after every action, whether that action is a tool call, a tool response, or a user message.

image.png
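That resend-everything behavior is easy to demonstrate. In this sketch the message contents and their sizes are invented; the point is only that the payload sent to the model grows monotonically with each turn.

```python
# Illustration of context growth: each request to the model contains the
# system prompt plus the entire transcript so far, so the request size
# can only go up. Message strings below are made up for illustration.

system_prompt = "You are a coding agent with access to file and test tools."
transcript = []

def request_size(new_messages):
    transcript.extend(new_messages)
    request = [system_prompt] + transcript  # what actually goes to the model
    return sum(len(m) for m in request)

sizes = [
    request_size(["user: fix the failing test"]),
    request_size(["assistant: tool_call read_file('test_app.py')",
                  "tool: ...contents of test_app.py..."]),
    request_size(["assistant: tool_call run_tests()",
                  "tool: 1 failed, 12 passed"]),
]
# sizes is strictly increasing: old tool calls and responses are never dropped
```

This is why long agent sessions eventually hit the context limit even when each individual step is small.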

We make agents better by providing specific instructions, knowledge, or tool access that helps the LLM accomplish a particular workflow or task. This practice is known as context engineering.

image.png

Many coding agent tools already have instructions baked in on how to “gather context” or “formulate a plan” before working on a task. The slide below is from Anthropic’s Claude Code course.

CleanShot 2026-02-19 at 11.53.55@2x.png
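Baked-in instructions like these can be thought of as a system prompt template assembled from workflow steps and tool descriptions. A minimal sketch, with all wording and tool names invented rather than taken from any real product:

```python
# Sketch of context engineering: build a system prompt that front-loads
# workflow instructions and tool documentation. Everything here is a
# hypothetical example, not an actual tool's prompt.

def build_system_prompt(workflow, tools):
    tool_docs = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (
        "You are a coding agent.\n"
        f"Workflow:\n{workflow}\n"
        f"Available tools:\n{tool_docs}"
    )

prompt = build_system_prompt(
    "1. Gather context with read-only tools.\n"
    "2. Formulate a plan before making changes.\n"
    "3. Only then edit files and verify with tests.",
    {"read_file": "return the contents of a file",
     "run_tests": "run the test suite and report failures"},
)
```

The design choice here is that guidance lives in the prompt rather than in the loop's code, so the same agent loop can be steered toward different workflows by swapping the context it is given.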