2026 · 05 · 5 min

Prompt engineering for agentic coding: structure beats prose

Agents reward structured prompts: goal, constraints, output format, scope boundary. Free-form prose underspecifies the problem and the agent fills the gaps with whatever sounds plausible.

Prompt engineering for chat is different from prompt engineering for agents. With chat, you can clarify in turn two. With an agent that's about to write across ten files, turn two is too late. The prompt is the contract.

The structure I default to: a one-line goal, a bullet list of constraints, a sentence on what's out of scope, and a sentence on what the output should look like. That's it. No paragraphs of context the agent will skim, no apologetic framing, no `please`. Agents reward terseness and punish ambiguity.
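As a fill-in sketch (the angle-bracket slots are placeholders, not literal text):

```
Goal: <one line: what done looks like>
Constraints:
- <hard rule the agent must not break>
- <hard rule>
Out of scope: <one sentence on what not to touch>
Output: <one sentence: diff, patch, summary, whatever you'll review>
```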

Constraints are where most prompts underspecify. "Don't add tests" is more useful than it looks, because the model's default is to add tests. "Edit only files matching X" prevents scope creep. "Fail loudly instead of falling back silently" blocks the silent-fallback pattern that's the classic failure mode of LLM-written code.
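Spelled out for a hypothetical billing task (the paths and the fallback detail are invented for illustration):

```
Constraints:
- don't add tests; I'll write those after review
- edit only files matching src/billing/**
- if the rate lookup fails, raise; don't fall back to a cached or default value
```

Each line closes off a default the model would otherwise reach for.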

The other shift is what to leave out. Don't paste the whole file when the agent can read it. Don't explain the codebase when CLAUDE.md is already in context. Don't restate the goal three times. The token you save on filler is a token the model spends on actually thinking about your problem.
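Put together: a lean prompt for a hypothetical refactor (file names invented) that leans on the agent to read its own context.

```
Goal: extract the duplicated retry logic in src/api/handlers.py into one helper.
Constraints:
- refactor only; no behavior changes
- edit only src/api/handlers.py
Out of scope: the retry settings in src/api/config.py.
Output: the diff, nothing else.
```

No pasted file, no codebase tour, the goal stated once.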

WRITTEN BY
Ibrahim Aly
SENIOR FS ENGINEER · BERLIN ↔ CAIRO