Reasoning
The reasoning layer provides structured thinking strategies that go beyond simple LLM completions. Each strategy shapes how the agent breaks down and approaches a task. With 5 built-in strategies and support for custom ones, you can match the reasoning approach to the problem.
Available Strategies
ReAct (Default)
A Thought → Action → Observation loop that continues until the agent reaches a final answer. This is the most versatile strategy and the default when reasoning is enabled.
- Think — The agent reasons about the current state
- Act — If needed, emits `ACTION: tool_name({"param": "value"})` in JSON format
- Observe — The tool is executed via ToolService and the real result is fed back
- Repeat until `FINAL ANSWER:` is reached or max iterations are hit
Best for: Tasks requiring tool use, multi-step reasoning, and iterative refinement.
```ts
import { ReactiveAgents } from "reactive-agents";

const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning() // ReAct strategy by default
  .withTools() // Built-in tools (web search, file I/O, etc.)
  .build();

const result = await agent.run("What happened in AI this week?");
// ReAct loop: Think → ACTION: web_search({"query":"..."}) → Observe: [real results] → FINAL ANSWER
```

When `.withTools()` is added, the ReAct strategy executes real registered tools and uses their actual results as observations. Tool names are injected into the prompt context so the LLM knows what it can call. Without ToolService, the agent degrades gracefully, returning descriptive messages instead of tool results.
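To make the ACTION format concrete, here is an illustrative sketch of how such a line could be parsed into a tool name and JSON arguments. `parseAction` is a hypothetical helper, not the library's actual parser, which may handle more formats:

```ts
// Hypothetical helper showing how an ACTION line could be parsed.
// The library's real implementation may differ.
interface ParsedAction {
  tool: string;
  args: Record<string, unknown>;
}

function parseAction(line: string): ParsedAction | null {
  // Matches: ACTION: tool_name({"param": "value"})
  const match = line.match(/^ACTION:\s*(\w+)\((.*)\)\s*$/);
  if (!match) return null;
  try {
    return { tool: match[1], args: JSON.parse(match[2] || "{}") };
  } catch {
    return null; // malformed JSON arguments
  }
}
```

Lines that are not actions (thoughts, `FINAL ANSWER:`) simply fail to match and fall through to the rest of the loop.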
Reflexion
A Generate → Self-Critique → Improve loop based on the Reflexion paper (Shinn et al., 2023):
- Generate — Produce an initial response
- Critique — Self-evaluate: identify inaccuracies, gaps, or ambiguities
- Improve — Rewrite using the critique as feedback
- Repeat until `SATISFIED:` or `maxRetries` is reached
Best for: Quality-critical output — writing, analysis, summarization.
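The cycle above can be sketched as a small control loop. This is a simplified illustration with the LLM call stubbed out; `reflexionLoop` and its prompts are hypothetical stand-ins, not the library's API:

```ts
// Simplified Reflexion-style loop with a stubbed LLM call.
type LLMCall = (prompt: string) => string;

function reflexionLoop(task: string, llm: LLMCall, maxRetries = 3): string {
  let draft = llm(`Answer: ${task}`); // Generate
  for (let i = 0; i < maxRetries; i++) {
    const critique = llm(`Critique this answer: ${draft}`); // Self-critique
    if (critique.startsWith("SATISFIED:")) break; // good enough, stop early
    draft = llm(`Improve using critique "${critique}": ${draft}`); // Improve
  }
  return draft;
}
```

The early exit on `SATISFIED:` is why real-world cost is often below the worst case of `maxRetries` full cycles.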
```ts
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning({ defaultStrategy: "reflexion" })
  .build();

const result = await agent.run("Write a concise explanation of quantum entanglement");
// Generates → Critiques → Improves → Returns polished output
```

Configuration:
| Option | Default | Description |
|---|---|---|
| `maxRetries` | `3` | Max generate-critique-improve cycles |
| `selfCritiqueDepth` | `"deep"` | `"shallow"` or `"deep"` critique |
Trade-off: Reflexion uses more tokens than ReAct (typically 3× per retry cycle) because each cycle requires a generate pass, a critique pass, and an improve pass. The additional cost is usually worth it for tasks where output quality matters more than speed — writing, detailed analysis, or any domain where a first-pass answer is rarely optimal.
Plan-Execute-Reflect
A structured approach that generates a plan first, then executes each step:
- Plan — Generate a numbered list of steps to accomplish the task
- Execute — Work through each step sequentially, using tools if available
- Reflect — Evaluate execution against the original plan
- Refine — If reflection identifies gaps, generate a revised plan and re-execute
Best for: Complex tasks with a clear decomposition — project planning, multi-step research, structured analysis.
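The plan/execute/reflect/refine cycle can be sketched as follows. All three phases are stubbed as function parameters here; `planExecuteReflect` is a hypothetical illustration of the control flow, not the library's implementation:

```ts
// Simplified plan-execute-reflect loop with stubbed LLM phases.
function planExecuteReflect(
  planFn: (task: string, feedback?: string) => string[], // numbered plan steps
  executeFn: (step: string) => string,
  reflectFn: (results: string[]) => string | null, // null = no gaps found
  task: string,
  maxRefinements = 2,
): string[] {
  let feedback: string | undefined;
  let results: string[] = [];
  for (let round = 0; round <= maxRefinements; round++) {
    const plan = planFn(task, feedback); // Plan (or revise using feedback)
    results = plan.map(executeFn);       // Execute each step in order
    const gaps = reflectFn(results);     // Reflect on completeness
    if (gaps === null) break;            // plan satisfied, stop refining
    feedback = gaps;                     // Refine on the next round
  }
  return results;
}
```

Note that refinement re-executes the revised plan in full; a real implementation might reuse results from unchanged steps.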
```ts
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning({ defaultStrategy: "plan-execute-reflect" })
  .withTools()
  .build();

const result = await agent.run("Compare the GDP growth of the top 5 economies over the last decade");
// Plans steps → Executes each → Reflects on completeness → Refines if needed
```

Configuration:
| Option | Default | Description |
|---|---|---|
| `maxRefinements` | `2` | Max plan revision cycles |
| `reflectionDepth` | `"deep"` | `"shallow"` or `"deep"` reflection |
Tree-of-Thought
A two-phase plan-then-execute strategy that uses breadth-first tree search to find the best approach, then executes it using real tools:
Phase 1 — Planning (BFS tree search):
- Expand — Generate multiple candidate thoughts, grounded in available tools
- Score — Evaluate each thought’s promise (0.0–1.0)
- Prune — Discard thoughts below `pruningThreshold`
- Deepen — Expand surviving thoughts further (up to `depth` levels)
Phase 2 — Execution (ReAct loop):
- Execute — Run a ReAct-style think/act/observe loop guided by the best path, calling real tools
Best for: Complex tasks with multiple valid approaches that also require tool use (GitHub queries, file operations, multi-source research).
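The planning phase's expand/score/prune cycle can be sketched as a breadth-first search. In practice both expansion and scoring are LLM calls; here they are stubbed parameters, and `searchBestPath` is a hypothetical illustration, not the library's code:

```ts
// Simplified BFS planning phase: expand, score, prune, deepen.
function searchBestPath(
  expand: (path: string[]) => string[],  // candidate next thoughts
  score: (thought: string) => number,    // promise score, 0.0–1.0
  opts = { breadth: 3, depth: 3, pruningThreshold: 0.5 },
): string[] {
  let frontier: { path: string[]; total: number }[] = [{ path: [], total: 0 }];
  for (let level = 0; level < opts.depth; level++) {
    const next: typeof frontier = [];
    for (const node of frontier) {
      for (const thought of expand(node.path).slice(0, opts.breadth)) {
        const s = score(thought);
        if (s < opts.pruningThreshold) continue; // Prune weak thoughts
        next.push({ path: [...node.path, thought], total: node.total + s });
      }
    }
    if (next.length === 0) break; // everything pruned; keep current best
    frontier = next;
  }
  // Highest cumulative score wins; Phase 2 then executes this path.
  frontier.sort((a, b) => b.total - a.total);
  return frontier[0].path;
}
```

Pruning is what keeps the search tractable: without it the tree grows as breadth^depth.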
```ts
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning({ defaultStrategy: "tree-of-thought" })
  .withTools()
  .build();

const result = await agent.run("Research and summarize recent commits in this repo");
// Phase 1: Explores 3 branches × 3 depth levels → Prunes weak ideas → Selects best path
// Phase 2: Executes the plan with tool calls → FINAL ANSWER
```

Configuration:
| Option | Default | Description |
|---|---|---|
| `breadth` | `3` | Candidate thoughts per expansion |
| `depth` | `3` | Maximum tree depth |
| `pruningThreshold` | `0.5` | Minimum score to survive pruning |
Adaptive (Meta-Strategy)
The Adaptive strategy doesn’t reason itself — it analyzes the task and delegates to the best sub-strategy:
- Analyze — Classify the task’s complexity, type, and requirements
- Select — Choose the optimal strategy based on the analysis
- Delegate — Execute the selected strategy
Selection logic:
- Simple Q&A → ReAct
- Quality-critical writing → Reflexion
- Complex multi-step tasks → Plan-Execute-Reflect
- Creative/open-ended → Tree-of-Thought
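As a rough mental model, the routing table above behaves like a classifier. The real Adaptive strategy classifies tasks with an LLM analysis pass; this keyword-based `selectStrategy` is only a hypothetical sketch of the mapping, not the library's logic:

```ts
// Illustrative routing heuristic mirroring the selection logic above.
// The real strategy uses LLM-based task analysis, not keywords.
type Strategy = "react" | "reflexion" | "plan-execute-reflect" | "tree-of-thought";

function selectStrategy(task: string): Strategy {
  const t = task.toLowerCase();
  if (/\b(write|report|essay|summary)\b/.test(t)) return "reflexion"; // quality-critical
  if (/\b(plan|compare|research)\b/.test(t)) return "plan-execute-reflect"; // complex multi-step
  if (/\b(brainstorm|design|invent)\b/.test(t)) return "tree-of-thought"; // creative/open-ended
  return "react"; // simple Q&A default
}
```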
```ts
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning({ defaultStrategy: "adaptive" })
  .withTools()
  .build();

// Adaptive selects the best strategy per task
await agent.run("What's 2+2?"); // → Uses ReAct (simple)
await agent.run("Write a technical report"); // → Uses Reflexion (quality-critical)
await agent.run("Plan a microservices arch"); // → Uses Plan-Execute (complex)
```

Alternatively, enable adaptive routing via the `adaptive.enabled` flag while keeping a named default:

```ts
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning({ adaptive: { enabled: true } })
  .withTools()
  .build();
// Every task is classified and routed to the best strategy automatically
```

Strategy Comparison
| Strategy | LLM Calls | Best For | Trade-off |
|---|---|---|---|
| ReAct | 1 per iteration | Tool use, step-by-step tasks | Fastest, most versatile |
| Reflexion | 3 per retry cycle | Quality-critical output | Slower, higher quality |
| Plan-Execute | 2+ per plan cycle | Structured multi-step work | Predictable, thorough |
| Tree-of-Thought | 3× breadth × depth + execution | Creative + tool-using tasks | Most thorough: plans then executes |
| Adaptive | 1 + delegated | Mixed workloads | Auto-selects, slight overhead |
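The call counts in the table can be turned into rough worst-case budgets. The formulas below are back-of-the-envelope estimates derived from the table, not the library's accounting; exact counts depend on the implementation and on early exits:

```ts
// Rough worst-case LLM-call estimates from the comparison table.
// Treat as ballpark figures; real runs often exit early.
function worstCaseCalls(strategy: string, cfg: Record<string, number>): number {
  switch (strategy) {
    case "react":
      return cfg.maxIterations; // 1 call per iteration
    case "reflexion":
      return 3 * cfg.maxRetries; // generate + critique + improve per cycle
    case "plan-execute-reflect":
      return 2 * (1 + cfg.maxRefinements); // at least 2 calls per plan cycle
    case "tree-of-thought":
      return 3 * cfg.breadth * cfg.depth + cfg.maxIterations; // search + execution
    default:
      throw new Error(`unknown strategy: ${strategy}`);
  }
}
```

For example, Tree-of-Thought at its defaults (breadth 3, depth 3) spends up to 27 calls on search alone before execution begins.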
Enabling Reasoning
```ts
// Default strategy (ReAct)
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning()
  .build();
```

```ts
// Specific strategy
const agent = await ReactiveAgents.create()
  .withProvider("anthropic")
  .withReasoning({ defaultStrategy: "reflexion" })
  .build();
```

Custom Strategies
Register custom reasoning strategies using the StrategyRegistry:
```ts
import { StrategyRegistry } from "@reactive-agents/reasoning";
import { LLMService } from "@reactive-agents/llm-provider";
import { Effect } from "effect";

const registerMyStrategy = Effect.gen(function* () {
  const registry = yield* StrategyRegistry;

  yield* registry.register("my-custom", (input) =>
    Effect.gen(function* () {
      const llm = yield* LLMService;

      const response = yield* llm.complete({
        messages: [
          {
            role: "user",
            content: `${input.taskDescription}\n\nContext: ${input.memoryContext}`,
          },
        ],
        systemPrompt: "You are an expert problem solver.",
        maxTokens: input.config.strategies.reactive.maxIterations * 500,
      });

      return {
        strategy: "my-custom",
        steps: [
          { thought: "Custom reasoning", action: "none", observation: response.content },
        ],
        output: response.content,
        metadata: {
          duration: 0,
          cost: response.usage.estimatedCost,
          tokensUsed: response.usage.totalTokens,
          stepsCount: 1,
          confidence: 0.9,
        },
        status: "completed" as const,
      };
    }),
  );
});
```

Without Reasoning
When reasoning is not enabled, the agent uses a direct LLM loop:
- Send messages to the LLM
- If the LLM requests tool calls, execute them and append results
- Repeat until the LLM returns a final response (no tool calls)
- Stop when done or max iterations reached
This is faster and cheaper — suitable for simple Q&A, chat, or tasks where structured reasoning isn’t needed.
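The direct loop above can be sketched as follows. Both the LLM call and tool execution are stubbed as parameters; `directLoop` and its types are hypothetical illustrations, not the library's internals:

```ts
// Simplified direct LLM loop (no reasoning strategy), with stubbed calls.
interface LLMTurn {
  content: string;
  toolCalls: { name: string; args: string }[]; // empty = final response
}

function directLoop(
  callLLM: (messages: string[]) => LLMTurn,
  runTool: (name: string, args: string) => string,
  task: string,
  maxIterations = 10,
): string {
  const messages = [task];
  for (let i = 0; i < maxIterations; i++) {
    const turn = callLLM(messages);
    if (turn.toolCalls.length === 0) return turn.content; // final answer
    for (const call of turn.toolCalls) {
      messages.push(runTool(call.name, call.args)); // append tool results
    }
  }
  return "max iterations reached";
}
```

Compared with ReAct, there is no explicit Thought step and no structured ACTION parsing: the provider's native tool-calling interface drives the loop directly.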
Tools + Reasoning Integration
When both .withReasoning() and .withTools() are enabled, tools are wired directly into the reasoning loop:
- ToolService is provided to the ReasoningService layer at construction time
- During ReAct, when the LLM emits `ACTION: tool_name(...)`, the strategy calls `ToolService.execute()` with the parsed arguments
- The real tool result becomes the Observation fed back into the LLM
- Available tool names are injected into the reasoning prompt so the LLM knows what’s available
This means agents can genuinely interact with the world during reasoning — search the web, query databases, run calculations — and incorporate real results into their thinking.
All five strategies support tool integration. Tree-of-Thought uses tools in its execution phase (Phase 2), while ReAct, Plan-Execute, and Reflexion use them throughout their loops.