Effort levels control model selection and reasoning depth. Use lower effort for fast, simple tasks and higher effort for complex tasks requiring deep reasoning.

Specifying effort

Effort is set using macros—special hashtags that modify agent behavior during a run. Include #mini, #auto, or #max in any of these places:
  • Repository custom instructions — Apply a default effort level to all runs in a repo
  • Trigger-level instructions — Set effort for specific triggers (e.g., use #mini for auto-labeling)
  • GitHub issue body — Set effort when creating an issue
  • GitHub comment — Include the macro when mentioning @pullfrog
More specific locations take precedence. For example, #max in a comment overrides #mini set in repo instructions. If not specified, Pullfrog defaults to #auto.
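
For instance, a comment like the following (the task text is illustrative) runs a single job at maximum effort, overriding any repo-level default:

```
@pullfrog investigate the flaky integration test in the payments workflow #max
```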

Effort levels

  • #mini: Best for simple, straightforward tasks. Uses smaller/faster models with reduced reasoning to minimize latency and cost.
  • #auto: Good for most tasks. Balances capability and cost with sensible defaults.
  • #max: Best for complex tasks requiring deep reasoning, difficult debugging, or intricate planning. Uses the most capable models with maximum reasoning.

Agent implementations

Each agent implements effort levels differently based on its available models and configuration options.

Summary table

Agent | #mini | #auto | #max
Claude Code | Haiku | Opus + Sonnet | Opus
Codex | Mini model + low reasoning | Standard model | Max model + high reasoning
Gemini | Flash + low thinking | Flash + high thinking | Pro + high thinking
Cursor | Auto (default) | Auto (default) | Opus 4.5 Thinking
OpenCode | No effect | No effect | No effect

Claude Code

Claude Code selects a model based on the effort level.
Effort | Model | Description
mini | haiku | Fast, efficient
auto | opusplan | Opus for planning, Sonnet for execution
max | opus | Full Opus capability
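
These values correspond to Claude Code's model aliases. As a rough sketch, and assuming your installed CLI version accepts these short aliases on its --model flag, you could reproduce the same selections manually:

```bash
# Rough sketch: manually mirroring the per-effort model choices above.
# Assumes the Claude Code CLI accepts these short model aliases.
claude --model haiku      # roughly what #mini selects
claude --model opusplan   # roughly what #auto selects (Opus plans, Sonnet executes)
claude --model opus       # roughly what #max selects
```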

Codex (OpenAI)

Codex combines model selection with the modelReasoningEffort parameter.
Effort | Model | Reasoning | Description
mini | gpt-5.1-codex-mini | low | Smaller model, reduced reasoning
auto | gpt-5.1-codex | default | Standard model, default reasoning
max | gpt-5.1-codex-max | high | Largest model, maximum reasoning

Gemini

Gemini combines model selection with thinking level configuration.
Effort | Model | Thinking Level | Description
mini | gemini-3-flash-preview | LOW | Fast model, minimal thinking
auto | gemini-3-flash-preview | HIGH | Fast model, deep thinking
max | gemini-3-pro-preview | HIGH | Most capable model, deep thinking

Cursor

Cursor uses model selection via the --model CLI flag.
Effort | Model | Description
mini | auto (default) | Let Cursor select the optimal model
auto | auto (default) | Let Cursor select the optimal model
max | opus-4.5-thinking | Claude Opus 4.5 with extended thinking
If your project has a .cursor/cli.json with a model field specified, that model takes precedence over the effort-based selection.
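
For example, a minimal .cursor/cli.json that pins the model (assuming the model field sits at the top level of the file) would look like this; with it in place, every effort level runs against the pinned model:

```json
{
  "model": "opus-4.5-thinking"
}
```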

OpenCode

OpenCode does not currently support effort-level configuration. The effort parameter is ignored.
Effort | Behavior
mini | No effect
auto | No effect
max | No effect
To control the model used by OpenCode, use a project-level opencode.json configuration file or set environment variables (e.g., ANTHROPIC_MODEL, OPENAI_MODEL).
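
For example, you could export one of these variables in the environment where OpenCode runs; the model IDs below are placeholders, so substitute the model you actually want:

```bash
# Sketch: pinning OpenCode's model via environment variables.
export ANTHROPIC_MODEL="claude-opus-4-5"   # hypothetical model ID
# or, for an OpenAI-backed run:
export OPENAI_MODEL="gpt-5.1-codex"        # illustrative; verify against your provider's model list
```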