To demystify what AI coding agents are doing under the hood, we can look at this simple diagram by Anthropic:

https://www.anthropic.com/engineering/building-effective-agents

[Image: Anthropic's "augmented LLM" diagram — an LLM augmented with retrieval, tools, and memory]

An LLM agent takes user input, draws on its resources (information retrieval, tools that act on the external world, and memory written to its filesystem), and produces a response.
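The loop above can be sketched in a few lines of Python. This is a minimal illustration, not a real API: every name here (`retrieve`, `run_tool`, `call_model`, `MEMORY_FILE`) is a hypothetical stand-in for the corresponding box in the diagram.

```python
# Minimal sketch of the "augmented LLM" loop: gather context from
# retrieval, tools, and memory, then call the model with all of it.
# All function names are hypothetical stand-ins, not a real library.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # memory the agent writes to its filesystem

def retrieve(query: str) -> list[str]:
    """Stand-in for information retrieval (e.g. search over docs)."""
    return [f"doc snippet relevant to: {query}"]

def run_tool(name: str, arg: str) -> str:
    """Stand-in for a tool that acts on the external world."""
    return f"result of {name}({arg})"

def load_memory() -> dict:
    """Read back whatever the agent previously wrote to disk."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def call_model(prompt: str) -> str:
    """Stand-in for the actual LLM call."""
    return f"response based on context:\n{prompt}"

def agent(user_input: str) -> str:
    # Assemble context from the three resource boxes, then call the model.
    context = {
        "retrieval": retrieve(user_input),
        "tools": run_tool("search_codebase", user_input),
        "memory": load_memory(),
    }
    prompt = f"User: {user_input}\nContext: {json.dumps(context)}"
    answer = call_model(prompt)
    # Write memory back out so the next invocation can read it.
    MEMORY_FILE.write_text(json.dumps({"last_input": user_input}))
    return answer

print(agent("fix the login bug"))
```

The point of the sketch is that the model only ever sees what ends up in `prompt` — which is why the rest of this course is about controlling what goes into that context.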

So when I ask the AI for something, why doesn't it give me what I want? What determines the quality of its responses? The rest of this course focuses on helping you improve your use of each of these boxes through context engineering.

Usually the problem is your context

<aside> 💡

Whenever you are unsatisfied with your AI, ask yourself: what context do your best teammates have that the AI does not?

</aside>

Before you open the diagram below, try to answer the question above.

Before AI, it took new employees months to onboard: to absorb the unwritten rules, learn who owns what, find the experts in the organization, and understand the current processes and how things get done.

We don’t give AI nearly as much leeway to onboard even though we have supercomputers that can re-create most software applications in days with the right prompting.

What does good look like in practice?

[Image: diagram from Anthropic's multi-agent research system post]

https://www.anthropic.com/engineering/multi-agent-research-system