Agentic prompting guide

Prompting patterns for agentic predictability, instruction adherence, and tool use.

Intermediate · 18 min read · Aug 7, 2025
Prompting · Agents · Reasoning
Key takeaways
  • Use a layered prompt contract with explicit guardrails.
  • Calibrate agentic eagerness with clear exploration limits.
  • Validate outputs with schema checks and eval loops.

Agentic workflow predictability

Disruptive Rain models can operate anywhere on the autonomy spectrum. Define how proactive the model should be and what it must avoid.

Lower reasoning effort and explicit search limits reduce over-exploration and shorten completion times.

  • Set a clear goal and stop criteria for exploration.
  • Declare which tools are allowed for each task.
  • Prefer concise outputs unless deeper analysis is required.
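The bullets above can be sketched as a small prompt-assembly helper. This is a minimal illustration, not a real API: the names `build_agent_prompt` and `MAX_TOOL_CALLS` are hypothetical, and the exact wording of the contract is up to you.

```python
MAX_TOOL_CALLS = 5  # explicit exploration limit (illustrative value)

def build_agent_prompt(goal: str, allowed_tools: list[str]) -> str:
    """Assemble a prompt that sets a goal, stop criteria, and a tool allowlist."""
    return "\n".join([
        f"Goal: {goal}",
        f"Allowed tools: {', '.join(allowed_tools)}",
        # Stop criteria keep the agent from exploring past the point of usefulness.
        f"Stop criteria: act after at most {MAX_TOOL_CALLS} tool calls, "
        "or as soon as you can name the exact change to make.",
        "Output: be concise unless deeper analysis is explicitly requested.",
    ])

prompt = build_agent_prompt("Fix the failing unit test", ["read_file", "run_tests"])
```

Because the limits live in one place, you can tighten or loosen eagerness per task without rewriting the rest of the prompt.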

Optimize instruction hierarchy

Use system messages for mission, safety, and output format. Use developer messages for workflow steps and tool rules.

Keep user prompts focused on the task. Avoid mixing safety policy with user instructions so you can test them independently.
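One way to picture the hierarchy is as a layered message list, as used by common chat-style APIs. The role names and content below are illustrative, not tied to a specific provider.

```python
# Each layer has one job: system = mission/safety/format,
# developer = workflow and tool rules, user = the task itself.
messages = [
    {
        "role": "system",
        "content": "Mission: code-review agent. Never execute untrusted code. "
                   "Output: concise markdown.",
    },
    {
        "role": "developer",
        "content": "Workflow: 1) read the diff, 2) run the linter tool, "
                   "3) summarize findings. Tools: linter only.",
    },
    {
        "role": "user",
        "content": "Review this diff for style issues.",
    },
]
```

Keeping safety policy out of the user layer means you can swap the task message freely while testing the system and developer layers on their own.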

Control context gathering

If the agent tends to over-search or over-call tools, add a scoped context-gathering block. This limits churn and keeps the run lean.

<context_gathering>
Goal: Get enough context fast. Parallelize discovery and stop as soon as you can act.

Method:
- Start broad, then fan out to focused subqueries.
- In parallel, launch varied queries; read top hits per query.
- Avoid repeat searches. Cache and dedupe results.

Early stop criteria:
- You can name exact content to change.
- Top hits converge on the same area.

Escalate once:
- If signals conflict, run one refined batch, then proceed.
</context_gathering>
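The contract above can be approximated in code. This is a minimal sketch under stated assumptions: `search` stands in for whatever retrieval tool the agent has, and the convergence check (all top hits identical) is a deliberately simple proxy for "top hits converge on the same area".

```python
def gather_context(query_batches, search):
    """Run query batches, dedupe results, and stop early when top hits converge.

    query_batches: lists of queries; at most two batches run (escalate once).
    search: callable returning a ranked list of result ids for a query.
    """
    seen = set()
    winner = None
    for batch in query_batches[:2]:            # initial pass, escalate at most once
        tops = []
        for q in batch:
            results = search(q)
            seen.update(results)               # cache and dedupe across queries
            if results:
                tops.append(results[0])        # read the top hit per query
        if tops and len(set(tops)) == 1:       # early stop: top hits converge
            return tops[0], seen
        winner = tops[0] if tops else winner
    return winner, seen                        # proceed after one refined batch
```

The hard cap of two batches mirrors the "escalate once" rule: if signals still conflict after the refined batch, the agent proceeds with its best candidate rather than searching again.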

Close the loop with evals

Treat prompts as code. Run evals whenever you change prompts, tools, or models so regressions surface early.

  • Capture failures and convert them into test cases.
  • Score outputs with schema checks and rubric grading.
  • Track regression trends and roll back quickly.
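A minimal eval harness for the loop above might look like the following. The helper names and the shape of the test cases are assumptions for illustration; only the standard-library `json` module is used.

```python
import json

def check_schema(output: str, required_keys: set[str]) -> bool:
    """Schema check: output must parse as JSON and contain every required key."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return required_keys <= data.keys()

def run_evals(cases, agent) -> float:
    """Run captured failure cases through the agent; return the pass rate."""
    passed = sum(
        check_schema(agent(case["input"]), case["required_keys"])
        for case in cases
    )
    return passed / len(cases)
```

Tracking the returned pass rate across prompt, tool, or model changes gives you the regression trend to watch, and a sharp drop is the signal to roll back.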