Lesson 2-2: Prompt & Tool Chaining
Learning Objectives
By the end of this lesson you will be able to:
- Chain multiple prompts and tools into conditional, multi-step workflows
- Design branching logic so the agent dynamically selects different sub-chains based on intermediate results
- Abstract provider details (LLM or vector store) to enable seamless swapping without code changes
- Construct prompts at runtime that adapt to context, tool output, and policy constraints
- Implement error handling, graceful degradation, and compensating actions within chained pipelines
1. Introduction: Why Prompt & Tool Chaining?
Monolithic prompts grow brittle as task complexity increases. Chaining breaks a complex workflow into modular steps, each a prompt or tool call, improving maintainability, reusability, and clarity. Conditional branching enables the agent to adapt at runtime, executing only the relevant sub-chains.
Benefits of Chaining
- Modularity: Break complex tasks into manageable pieces
- Reusability: Individual chains can be reused across different workflows
- Maintainability: Easier to debug and update individual components
- Adaptability: Dynamic routing based on intermediate results
2. Branching Logic for Agents
2.1 Conditional Chains
Agents inspect intermediate outputs and choose the next sub-chain:
Conditional Chain Example
Use Case: Respond differently to positive vs. negative customer feedback.
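A minimal sketch of such a conditional chain for this use case. The `llm` function is a hypothetical stand-in for any model client (prompt string in, completion text out), and the two sub-chains are plain functions:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any model client: prompt in, completion text out."""
    raise NotImplementedError("plug in your model client here")

def classify_sentiment(review: str) -> str:
    """Intermediate step whose output decides which branch runs next."""
    answer = llm(f"Classify the sentiment of this review as positive or negative:\n{review}")
    return "negative" if "negative" in answer.lower() else "positive"

def apology_chain(review: str) -> str:
    """Sub-chain for negative feedback."""
    return llm(f"Draft a short, empathetic apology in response to this review:\n{review}")

def thank_you_chain(review: str) -> str:
    """Sub-chain for positive feedback."""
    return llm(f"Draft a short thank-you note in response to this review:\n{review}")

def handle_feedback(review: str) -> str:
    """Conditional chain: inspect the intermediate output, then choose the next sub-chain."""
    if classify_sentiment(review) == "negative":
        return apology_chain(review)
    return thank_you_chain(review)
```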
2.2 Dynamic Routing
Implement a dispatcher that reads an "intent" or numeric score and invokes the appropriate chain:
Dynamic Routing Implementation
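One possible dispatcher, reusing the hypothetical `llm` stand-in from the previous sketch; the intent labels and `chain_map` entries are illustrative:

```python
def llm(prompt: str) -> str: ...  # hypothetical model client stub (see the conditional-chain sketch)

# Each value is a callable sub-chain; adding a branch means adding one entry here.
chain_map = {
    "complaint": lambda text: llm(f"Draft an apology responding to this complaint:\n{text}"),
    "question":  lambda text: llm(f"Answer this customer question:\n{text}"),
    "praise":    lambda text: llm(f"Draft a thank-you reply to:\n{text}"),
}

def detect_intent(text: str) -> str:
    """Ask the model for a one-word intent label matching a chain_map key."""
    label = llm(
        "Classify the message as one of: complaint, question, praise.\n"
        f"Message: {text}\nLabel:"
    ).strip().lower()
    return label if label in chain_map else "question"  # safe default for unexpected labels

def dispatch(text: str) -> str:
    """Dispatcher: read the intent, then invoke the matching sub-chain."""
    return chain_map[detect_intent(text)](text)
```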
Benefit: New branches can be added by extending chain_map, without altering the core loop logic.
3. Multi-Step Prompt Chaining
3.1 Sequential Prompts
Feed the output of one LLM call into the next:
Sequential Chain Steps
- Extract entities
- Summarize findings
- Draft action items
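A sketch of those three steps as plain functions, again assuming the hypothetical `llm` callable; each call's output is folded into the next prompt:

```python
def llm(prompt: str) -> str: ...  # hypothetical model client stub

def run_sequence(report: str) -> str:
    # Step 1: extract entities from the raw input.
    entities = llm(f"List the key entities (people, products, dates) mentioned in:\n{report}")
    # Step 2: summarize findings, grounded in the extracted entities.
    summary = llm(f"Summarize the findings about these entities:\n{entities}\n\nSource text:\n{report}")
    # Step 3: draft action items from the summary alone.
    return llm(f"Based on this summary, draft three concrete action items:\n{summary}")
```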
3.2 Nested Prompts
Embed a sub-chain's final output into a higher-level template:
Nested Prompt Template
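One way such a template might look: the inner sub-chain condenses the source text, and only its output is embedded in the outer prompt (the template text and names are illustrative):

```python
def llm(prompt: str) -> str: ...  # hypothetical model client stub

OUTER_TEMPLATE = """You are drafting a customer-facing status update.

Background summary (produced by a sub-chain):
{sub_chain_output}

Write a concise update that builds on the background without repeating it verbatim."""

def nested_prompt(report: str) -> str:
    # Inner sub-chain: condense the raw report first.
    sub_chain_output = llm(f"Summarize the following in three sentences:\n{report}")
    # Outer prompt: embed the sub-chain's final output in the higher-level template.
    return llm(OUTER_TEMPLATE.format(sub_chain_output=sub_chain_output))
```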
Advantage: Ensures context continuity and reduces prompt length.
4. Tool Chain Composition with LangChain
4.1 SequentialChain vs. LLMChain
Chain Types
- LLMChain: Single prompt → LLM call
- SequentialChain: A list of LLMChain steps executed in sequence, passing outputs along
SequentialChain Implementation
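A sketch using the classic LLMChain/SequentialChain API (deprecated in newer LangChain releases in favor of LCEL); the model wrapper, model name, and prompt texts are placeholders, and exact imports vary by version:

```python
from langchain.chains import LLMChain, SequentialChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI  # any chat-model wrapper works here

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Step 1: extract entities from the review.
extract = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("List the key entities in this review:\n{review}"),
    output_key="entities",
)

# Step 2: summarize, consuming the previous step's output.
summarize = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Summarize what the review says about: {entities}"),
    output_key="summary",
)

workflow = SequentialChain(
    chains=[extract, summarize],
    input_variables=["review"],
    output_variables=["summary"],
)

result = workflow.invoke({"review": "Checkout crashed twice, but support resolved it quickly."})
print(result["summary"])
```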
4.2 Custom Chains with ChainOfThoughtChain
- ChainOfThoughtChain: Integrates reasoning steps and tool calls in one chain, preserving chain-of-thought (CoT) context
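ChainOfThoughtChain is not a built-in LangChain class; one way to sketch it is as a custom chain subclassing LangChain's Chain base class. The field names and wiring below are illustrative, and `llm_chain` is assumed to be an LLMChain with a single {question} input:

```python
from typing import Any, Dict, List
from langchain.chains.base import Chain

class ChainOfThoughtChain(Chain):
    """Custom chain: generate reasoning, call a tool, then answer with both in context."""

    llm_chain: Any  # an LLMChain that produces the reasoning and the final answer
    tool: Any       # any callable tool, e.g. a calculator or search function

    @property
    def input_keys(self) -> List[str]:
        return ["question"]

    @property
    def output_keys(self) -> List[str]:
        return ["reasoning", "answer"]

    def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # Step 1: explicit step-by-step reasoning, kept so the CoT context is preserved.
        reasoning = self.llm_chain.run(question=f"Think step by step about: {inputs['question']}")
        # Step 2: the reasoning drives the tool call.
        tool_result = self.tool(reasoning)
        # Step 3: fold question, reasoning, and tool output into the final answer.
        answer = self.llm_chain.run(
            question=(
                f"{inputs['question']}\nReasoning so far:\n{reasoning}\n"
                f"Tool result:\n{tool_result}\nFinal answer:"
            )
        )
        return {"reasoning": reasoning, "answer": answer}
```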
5. Provider Abstraction & Pluggability
5.1 Abstracting LLM Calls
Wrap model invocation behind a uniform interface:
Model Bridge Abstraction
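A minimal sketch of such a bridge. ModelBridge, OpenAIBridge, and summarize are hypothetical names, and the OpenAI call assumes the v1 Python SDK:

```python
from abc import ABC, abstractmethod

class ModelBridge(ABC):
    """Uniform interface: chain code depends on generate(), never on a vendor SDK."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...

class OpenAIBridge(ModelBridge):
    """One concrete bridge; the SDK client is injected rather than created here."""

    def __init__(self, client, model: str = "gpt-4o-mini"):
        self.client = client
        self.model = model

    def generate(self, prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

def summarize(bridge: ModelBridge, text: str) -> str:
    """Downstream chain logic sees only the bridge, so providers can be swapped by injection."""
    return bridge.generate(f"Summarize in two sentences:\n{text}")
```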
Provider Switching: Switch providers by injecting a different client implementation (OpenAI, Anthropic, Hugging Face).
5.2 Abstracting Retrieval Chains
Define a RetrieverInterface so you can swap FAISS, Chroma, or an API-based retriever:
Retriever Interface
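A possible shape for that interface, with a FAISS-backed adapter as one concrete implementation. The class names are illustrative, the injected vector store is assumed to expose LangChain's similarity_search, and `bridge` is the ModelBridge from the previous sketch:

```python
from abc import ABC, abstractmethod
from typing import List

class RetrieverInterface(ABC):
    """Every backend (FAISS, Chroma, remote API) must return plain-text passages."""

    @abstractmethod
    def retrieve(self, query: str, k: int = 4) -> List[str]:
        ...

class FAISSRetriever(RetrieverInterface):
    """Adapter around an injected vector store exposing similarity_search()."""

    def __init__(self, vector_store):
        self.vector_store = vector_store

    def retrieve(self, query: str, k: int = 4) -> List[str]:
        docs = self.vector_store.similarity_search(query, k=k)
        return [doc.page_content for doc in docs]

def answer_with_context(bridge, retriever: RetrieverInterface, question: str) -> str:
    """Downstream logic is identical no matter which retriever was injected."""
    context = "\n\n".join(retriever.retrieve(question))
    return bridge.generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```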
Implementation Flexibility: Implement and inject concrete retrievers without changing downstream logic.
6. Error Handling in Chained Workflows
6.1 Graceful Degradation
If a tool or chain fails, return a safe default and continue:
Graceful Degradation
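A small wrapper illustrating the idea; run_step_with_fallback and the default message are hypothetical:

```python
import logging

logger = logging.getLogger("feedback_workflow")

SAFE_DEFAULT = "We could not process this step automatically; a team member will follow up."

def run_step_with_fallback(step, payload, default=SAFE_DEFAULT):
    """Run one chain step; on failure, log the error and return a safe default so the pipeline continues."""
    try:
        return step(payload)
    except Exception:
        logger.exception("Chain step %r failed; degrading gracefully", getattr(step, "__name__", step))
        return default
```

Wrapping each step this way (for example, `summary = run_step_with_fallback(summarize_chain, review)`) keeps one failing tool from aborting the whole workflow.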
6.2 Compensating Actions
On critical failure, trigger a corrective sub-chain (e.g., notify human or rollback):
Compensating Actions
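A sketch of a compensating action; notify_human, send_refund_email, and process_refund are illustrative names, and the failure is simulated:

```python
def notify_human(context: dict) -> None:
    """Compensating sub-chain: escalate to a person (stubbed here as a console message)."""
    print(f"[ESCALATION] Manual review needed: {context}")

def send_refund_email(order_id: str) -> str:
    raise RuntimeError("email provider unavailable")  # simulated critical failure

def process_refund(order_id: str) -> str:
    try:
        return send_refund_email(order_id)
    except Exception as exc:
        # Critical step failed: trigger the corrective sub-chain instead of continuing silently.
        notify_human({"order_id": order_id, "error": str(exc), "action": "roll back refund draft"})
        return "refund pending manual review"
```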
7. Mini-Project: Customer Feedback Workflow
Customer Feedback Workflow Challenge
Task: Build a chained workflow to process and respond to customer reviews:
- Sentiment Analysis Chain: Classify review as "negative," "neutral," or "positive."
- Branching Chains:
- Negative → apology_chain (draft apology email)
- Neutral → info_chain (provide more information)
- Positive → thank_you_chain (draft thank-you email)
- Follow-Up Survey Chain: Always send a survey prompt via a simulated API.
- Logging: Record each chain invocation, inputs, and outputs to feedback_workflow.log.
Use LangChain's SequentialChain and custom conditionals. Demonstrate graceful degradation and compensating actions if any chain step throws an exception.
8. Self-Check Questions
Knowledge Check
- How do conditional chains improve agent adaptability compared to static pipelines?
- Why is provider abstraction critical for maintaining a multi-chain workflow?
- Describe a scenario requiring a compensating action in a chained agent.
- How would you implement a fallback for a failed chain step without stopping the entire workflow?
Navigation
Next Up
Lesson 2-3: Hybrid RAG & Context →
Lesson 2-3 will explore Hybrid RAG & Context Management, combining graph-based retrieval, vector search, and long-context strategies for robust knowledge grounding.