LLM Agentic Patterns - A Practical Guide
Prompt Engineering Best Practices
Clear Instructions
Be explicit about output format, style, and constraints.
The model cannot infer unstated requirements.
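For example, rather than "Summarize this article," spell out the format and limits (the wording below is only an illustration):
Summarize the article below in exactly three bullet points,
each under 20 words, in a neutral tone. Output plain text only.
Article: [text]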
Few-Shot Examples
Example Input: "Analyze sentiment"
Example Output: {"sentiment": "positive", "confidence": 0.92}
Now analyze: [your actual query]
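In code, few-shot prompting amounts to prepending worked examples to the actual query before calling the model. A minimal sketch, assuming a generic llm() completion helper; the example pair mirrors the placeholder above.
examples = [
    {"input": "Analyze sentiment: [example text]",
     "output": '{"sentiment": "positive", "confidence": 0.92}'},
]
def few_shot_prompt(query):
    # Format each example as an Input/Output pair, then append the real query.
    shots = "\n".join(f"Input: {e['input']}\nOutput: {e['output']}" for e in examples)
    return llm(f"{shots}\nInput: {query}\nOutput:")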
Reduce Hallucination with References
Context: [source documents]
Instructions: Only answer based on the provided context.
If information is not in the context, say "I don't know."
Question: [user query]
Chain-of-Thought Reasoning
Problem: [complex task]
First, work out your own solution step by step.
Then compare it with the expected outcome.
Staged Prompts
Stage 1: Extract key entities → output1
Stage 2: Analyze relationships using output1 → output2
Stage 3: Generate summary using output2 → final result
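Expressed as code, a staged prompt is just a chain of calls where each stage's output becomes part of the next prompt. A minimal sketch, again assuming a generic llm() completion helper:
def staged_pipeline(document):
    # Stage 1: extract key entities
    output1 = llm(f"Extract the key entities from this text:\n{document}")
    # Stage 2: analyze relationships using the extracted entities
    output2 = llm(f"Describe the relationships between these entities:\n{output1}")
    # Stage 3: generate the final summary from the analysis
    return llm(f"Write a short summary of this analysis:\n{output2}")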
Prompt Routing
def route_query(user_input):
    # Classify the query, then send it to a model sized for the task.
    intent = detect_intent(user_input)
    if intent == "simple_faq":
        return small_model(user_input)   # cheap, fast model
    elif intent == "complex_reasoning":
        return large_model(user_input)   # larger, more capable model
    return large_model(user_input)       # default when the intent is unclear
Automated Evaluation
## Ground truth
qa_pairs = [
    {"question": "...", "expected": "..."},
]
## LLM as judge
for pair in qa_pairs:
    generated = model(pair["question"])
    score = llm_judge(generated, pair["expected"])
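The llm_judge helper is not defined in the source; below is a minimal sketch of one possible implementation, assuming a generic llm() call and an illustrative 1-to-5 rubric.
def llm_judge(generated, expected):
    # Ask the model to grade the generated answer against the reference.
    judge_prompt = f"""
    Expected answer: {expected}
    Generated answer: {generated}
    On a scale of 1 to 5, how well does the generated answer match the
    expected answer in factual content? Reply with a single integer.
    """
    raw = llm(judge_prompt)
    try:
        return int(raw.strip())
    except ValueError:
        return 0  # treat unparseable judgments as a failed check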
RAG: Retrieval Augmented Generation
Problem: Hallucination, outdated knowledge, no access to proprietary data.
Solution: Ground LLM responses in external data.
Implementation:
## 1. Indexing
chunks = split_documents(documents)
embeddings = embed(chunks)
vector_db.store(embeddings, chunks)
## 2. Query
query_embedding = embed(user_query)
## 3. Retrieval
top_k_chunks = vector_db.nearest_neighbors(query_embedding, k=5)
## 4. Prompt Construction
prompt = f"""
Context: {top_k_chunks}
Question: {user_query}
Answer based only on the context above.
"""
response = llm(prompt)
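split_documents, embed, and vector_db above are placeholders. As a self-contained sketch of the retrieval step, assuming embeddings are already plain numeric vectors, cosine similarity over an in-memory list can stand in for the vector database:
import numpy as np

def nearest_chunks(query_embedding, chunk_embeddings, chunks, k=5):
    # Cosine similarity between the query and every stored chunk embedding.
    q = np.asarray(query_embedding, dtype=float)
    m = np.asarray(chunk_embeddings, dtype=float)
    sims = m @ q / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(sims)[::-1][:k]   # indices of the k most similar chunks
    return [chunks[i] for i in top]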
Agentic Patterns
Agent = LLM + Tools + Memory + Iteration
ReAct (Reason + Act)
Thought: I need current weather to answer this.
Action: call_weather_api(location="San Francisco")
Observation: Temperature is 65°F, sunny
Thought: Now I can provide the answer.
Answer: It's 65°F and sunny in San Francisco.
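A stripped-down loop that produces traces like the one above might look like the sketch below; llm(), parse_action(), and the tools dict are assumed helpers, and the stopping rule is simplified.
def react_loop(question, tools, max_steps=5):
    # Alternate model reasoning with tool calls until the model emits an answer.
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought:")
        transcript += "Thought:" + step + "\n"
        if "Answer:" in step:
            return step.split("Answer:", 1)[1].strip()
        tool_name, tool_args = parse_action(step)   # e.g. ("call_weather_api", {"location": "San Francisco"})
        observation = tools[tool_name](**tool_args)
        transcript += f"Observation: {observation}\n"
    return "No answer produced within the step budget."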
Planning
def plan_task(high_level_goal):
    prompt = f"""
    Break down this goal into actionable steps:
    Goal: {high_level_goal}
    Output format:
    Step 1: [action]
    Step 2: [action]
    ...
    """
    return llm(prompt)
## Example Output:
## Step 1: Check order status in database
## Step 2: Verify refund policy
## Step 3: Calculate refund amount
## Step 4: Draft customer response
Reflection
## First pass
draft_code = agent.generate_code(problem)
## Self-critique
reflection_prompt = f"""
Code: {draft_code}
Review this code for:
- Bugs
- Edge cases
- Performance issues
Suggest improvements.
"""
critique = llm(reflection_prompt)
## Improved output
final_code = agent.generate_code(problem, critique)
Tool Usage
import json
tools = {
    "search": web_search_api,
    "calculator": calculate,
    "database": query_db,
}
response = llm(prompt)
## LLM outputs: {"tool": "calculator", "args": {"expression": "127 * 83"}}
tool_call = json.loads(response)                       # parse the tool-call JSON
tool_result = tools[tool_call["tool"]](**tool_call["args"])
## Feed result back
final_answer = llm(f"Using result {tool_result}, answer: {original_question}")
Multi-Agent Collaboration
agents = {
    "researcher": Agent(role="Find relevant information"),
    "analyzer": Agent(role="Analyze data"),
    "writer": Agent(role="Draft final report"),
}
## Workflow
research_output = agents["researcher"].run(task)
analysis_output = agents["analyzer"].run(research_output)
final_report = agents["writer"].run(analysis_output)
Use Cases
Software Development
agent.review_code(pull_request)
agent.diagnose_bug(error_trace)
agent.generate_test(function)
Research
agent.search_papers(topic)
agent.synthesize_findings(papers)
agent.generate_report()
Automation
agent.process_support_ticket()
agent.update_crm()
agent.send_response()
References
- Agentic Design Patterns Part 1: https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/
- Large Language Model Agents, MOOC Fall 2024: https://llmagents-learning.org/f24
- Natural Language Processing with Deep Learning, 2024: https://web.stanford.edu/class/cs224n
- Building Effective Agents: https://www.anthropic.com/research/building-effective-
- RAG and AI Agents from Deep Learning: https://cs230.stanford.edu/syllabus/fall_2024/rag_agents.pdf
- Tool Use and LLM Agent Basics from Advanced NLP: https://www.phontron.com/class/anlp-fall-2024/assets/slides/anlp-15-tooluse-agent-basics.pdf
- What are AI Agents?: https://www.youtube.com/watch?v=F8NKVhkZZWl
- Frontiers-of-AI-Agents-Tutorial: https://frontiers-of-ai-agents-tutorial.github.io/
Source: Stanford Webinar - Agentic AI: A Progression of Language Model Usage
https://www.youtube.com/watch?v=kJLiOGle3Lw&t=1434s