Public Preview

Prompting Foundation Patterns

Abstract

Principled prompt designs (role/task setup, exemplars, chain-of-thought-style hints) that consistently improve output reliability across tasks.

Motivation

  • Reduce variance from underspecified instructions
  • Enable decomposition via step-by-step prompting

Architectures

  • System/role priming + task instruction + constraints + examples
  • CoT-style scaffolding (keep the chain of thought out of user-facing outputs for safety)
  • ReAct-like, text-only variant: interleaved reasoning and action descriptions, without actual tool execution
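The first architecture above can be sketched as a small assembly function. This is a minimal, provider-agnostic illustration: the `system`/`user`/`assistant` roles follow the common chat-message convention, and the role text, constraints, and exemplars are hypothetical placeholders, not a prescribed format.

```python
def build_prompt(role: str, task: str, constraints: list[str],
                 examples: list[tuple[str, str]]) -> list[dict]:
    """Assemble role priming, task instruction, constraints, and exemplars."""
    # Role priming and constraints go into the system message.
    system = role + "\nConstraints:\n" + "\n".join(f"- {c}" for c in constraints)
    messages = [{"role": "system", "content": system}]
    # Few-shot exemplars are injected as prior user/assistant turns.
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    # The actual task instruction comes last.
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_prompt(
    role="You are a concise technical editor.",
    task="Summarize the following changelog in three bullets.",
    constraints=["Answer in bullet points", "No more than 40 words"],
    examples=[("Summarize: fixed login bug", "- Fixed a login bug")],
)
```

Keeping the static layers (role, constraints) separate from the per-request task also makes the prompt easier to version and test.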

Design Choices

  • Few-shot vs. zero-shot with meta-instructions
  • Output constraints (bullets, JSON, tables) vs. free-form
  • Guardrails: banned topics, safety reminders
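To make the few-shot vs. zero-shot trade-off concrete, the two prompt strings below contrast a zero-shot meta-instruction with few-shot exemplars for the same sentiment task. The reviews and labels are illustrative only.

```python
# Zero-shot: meta-instructions describe the task and the expected answer shape.
zero_shot = (
    "Classify the sentiment of the review as positive, negative, or neutral. "
    "Consider tone and word choice, then answer with a single word."
)

# Few-shot: exemplars demonstrate the input/output pattern instead.
few_shot = "\n".join([
    "Review: 'Great battery life!' -> positive",
    "Review: 'Screen cracked in a week.' -> negative",
    "Review: 'It is a phone.' -> neutral",
    "Review: '{review}' ->",
])

prompt = few_shot.format(review="Fast shipping, works as advertised.")
```

Few-shot exemplars buy format stability at the cost of prompt length; zero-shot meta-instructions are cheaper but leave more to the model's defaults.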

Pros/Cons

  • Pros: More reliable outputs, better controllability
  • Cons: Longer prompts increase cost; risk of leaking internal rubrics

Evaluation Metrics

  • Task success rate, rubric adherence
  • Format correctness; post-edit rates
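Format correctness is straightforward to measure automatically. A minimal sketch, assuming the target format is a JSON object: the metric is the fraction of outputs that parse as valid JSON dicts.

```python
import json

def format_correctness(outputs: list[str]) -> float:
    """Fraction of model outputs that parse as valid JSON objects."""
    ok = 0
    for text in outputs:
        try:
            parsed = json.loads(text)
            ok += isinstance(parsed, dict)
        except json.JSONDecodeError:
            pass
    return ok / len(outputs) if outputs else 0.0

rate = format_correctness(['{"a": 1}', "not json", '{"b": 2}'])
# rate == 2/3
```

The same pattern extends to rubric adherence by swapping the JSON check for a rubric-specific validator.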

Vendor/Tooling

  • Provider system messages; structured outputs for schemas
  • Prompt caching and templates in SDKs
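One common templating pattern, sketched here with the standard library rather than any specific SDK: keep a byte-identical static prefix across calls (so provider-side prompt caching, where offered, can reuse it) and substitute only the per-request suffix. The policy text and parameter name are hypothetical.

```python
from string import Template

# Static prefix: kept identical across requests to stay cache-friendly.
STATIC_PREFIX = (
    "You are a support assistant. Follow the policy below.\n"
    "Policy: answer briefly; cite the manual section when possible.\n"
)

# Dynamic suffix: only this part varies per request.
SUFFIX = Template("Customer question: $question")

def render(question: str) -> str:
    """Render the full prompt: cacheable prefix + per-request suffix."""
    return STATIC_PREFIX + SUFFIX.substitute(question=question)
```

Placing volatile content last matters because caches typically match on the longest common prefix.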

Design Checklist

  • State role, goal, success criteria
  • Provide canonical examples; specify format
  • Include safety and refusal guidance
  • Constrain decoding and validate outputs
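The last checklist item, validating outputs, often takes the form of a validate-and-retry loop. A sketch under stated assumptions: the required keys are a hypothetical schema, and `generate` stands in for any model call that returns text.

```python
import json

REQUIRED_KEYS = {"summary", "risk"}  # hypothetical schema for illustration

def validate(text: str) -> bool:
    """Check that the output is a JSON object containing the required keys."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and REQUIRED_KEYS <= data.keys()

def call_with_retries(generate, max_tries: int = 3) -> dict:
    """Call `generate` until its output validates, up to `max_tries` attempts."""
    for _ in range(max_tries):
        out = generate()
        if validate(out):
            return json.loads(out)
    raise ValueError("no valid output after retries")
```

Where a provider offers schema-enforced structured outputs, that constraint can replace most of this retry logic; the validator then serves as a final safety net.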

References

  • Best practices for prompting. OpenAI. https://platform.openai.com/docs/guides/prompt-engineering (accessed 2025-08-14; version as reported by provider)
  • Anthropic Prompting Guides. Anthropic. https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering (accessed 2025-08-14; version as reported by provider)