Building Your First AI Agent: A Practical LangChain Guide
AI agents represent the next evolution in how we interact with large language models. Unlike simple chatbots, agents can use tools, make decisions, and complete multi-step tasks autonomously. In this guide, we’ll build a functional AI agent using LangChain.
What You’ll Build
A research assistant agent that can:
- Search the web for information
- Perform calculations
- Summarize findings
- Answer complex questions using multiple tools
Prerequisites
- Python 3.9+
- OpenAI API key
- Basic Python knowledge
Setup
1. Create Virtual Environment
```bash
python -m venv agent-env
source agent-env/bin/activate  # On Windows: agent-env\Scripts\activate
```
2. Install Dependencies
```bash
pip install langchain langchain-openai langchain-community
```
3. Set API Key
```bash
export OPENAI_API_KEY="your-key-here"
```
Understanding Agents
Before diving into code, let’s understand the core concepts:
Tools
Functions the agent can call:
- Web search
- Calculator
- Database queries
- API calls
Chains
Sequences of operations:
- Input → Processing → Output
- Can include LLM calls, transformations, tool usage
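At its core, a chain is just function composition. Here is a minimal pure-Python sketch of the idea, with no LangChain involved; `fake_llm` is a hypothetical stand-in for a real model call:

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for an LLM call: returns a canned "summary".
    return f"Summary of: {prompt}"

def make_chain(*steps):
    """Compose steps left to right into a single callable."""
    def chain(x):
        for step in steps:
            x = step(x)
        return x
    return chain

# Input -> normalize -> "LLM" -> transform -> Output
pipeline = make_chain(str.strip, fake_llm, str.upper)
print(pipeline("  fusion energy news  "))  # SUMMARY OF: FUSION ENERGY NEWS
```

LangChain's chain classes add prompt templating, streaming, and callbacks on top, but the data flow is this same left-to-right composition.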
Agents
The decision-maker that:
- Analyzes the task
- Selects appropriate tools
- Orchestrates the workflow
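To make the decision loop concrete before we use the real thing, here is a bare-bones sketch of what an agent framework runs for you. The "reasoning" is a scripted stub standing in for LLM decisions, and the tool returns a canned result; everything here is illustrative, not LangChain's actual internals:

```python
def search(query: str) -> str:
    # Canned stand-in for a real web search tool.
    return "Tokyo population: about 14 million"

TOOLS = {"web_search": search}

# Scripted (action, input) decisions standing in for LLM reasoning steps.
SCRIPT = [("web_search", "population of Tokyo"), ("finish", None)]

def run_agent(task: str) -> str:
    observations = []
    for action, arg in SCRIPT:  # a real agent loops until the LLM decides to finish
        if action == "finish":
            return "; ".join(observations)
        observations.append(TOOLS[action](arg))
    return "; ".join(observations)

print(run_agent("What is the population of Tokyo?"))
# Tokyo population: about 14 million
```

A real ReAct agent replaces `SCRIPT` with an LLM that reads the task plus prior observations and emits the next action; the loop structure is the same.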
Building Your First Agent
Step 1: Define Tools
```python
from langchain.tools import Tool
from langchain_community.utilities import SerpAPIWrapper
from langchain.chains import LLMMathChain
from langchain_openai import OpenAI

# Web search tool (requires a SerpAPI key in SERPAPI_API_KEY
# and `pip install google-search-results`)
search = SerpAPIWrapper()
search_tool = Tool(
    name="web_search",
    func=search.run,
    description="Useful for searching current information on the internet"
)

# Calculator tool
llm = OpenAI(temperature=0)
llm_math = LLMMathChain.from_llm(llm=llm)
math_tool = Tool(
    name="calculator",
    func=llm_math.run,
    description="Useful for performing mathematical calculations"
)

tools = [search_tool, math_tool]
```
Step 2: Create the Agent
```python
from langchain.agents import initialize_agent, AgentType
from langchain_openai import ChatOpenAI

# Initialize the language model
llm = ChatOpenAI(temperature=0, model="gpt-4")

# Create the agent
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)
```
Step 3: Run the Agent
```python
# Simple query
response = agent.run("What is the population of Tokyo divided by the population of New York?")
print(response)
```
The agent will:
- Recognize it needs two pieces of information (populations)
- Use web search to find both
- Use the calculator to divide them
- Return the answer
Building a More Advanced Agent
Let’s create a research assistant with memory:
```python
from langchain.memory import ConversationBufferMemory

# Add memory
memory = ConversationBufferMemory(memory_key="chat_history")

# Create agent with memory
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True
)

# Multi-turn conversation
print(agent.run("What are the latest developments in fusion energy?"))
print(agent.run("What companies are leading in this field?"))  # Remembers context
print(agent.run("Compare their funding levels"))  # Builds on previous answers
```
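Under the hood, buffer memory is simple: it stores prior turns and prepends them to every new prompt so the model can see the conversation so far. A rough pure-Python sketch of the idea (not LangChain's actual implementation):

```python
class BufferMemory:
    """Keeps (human, ai) turns and renders them into the next prompt."""

    def __init__(self):
        self.turns = []

    def save(self, human: str, ai: str):
        self.turns.append((human, ai))

    def as_prompt(self, question: str) -> str:
        history = "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)
        return f"{history}\nHuman: {question}" if history else f"Human: {question}"

memory = BufferMemory()
memory.save("What are the latest developments in fusion energy?",
            "Several experiments have reported net energy gain.")
print(memory.as_prompt("What companies are leading in this field?"))
```

This is also why long conversations eventually hit the context window: the buffer grows without bound, which is what variants like summary or windowed memory address.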
Creating Custom Tools
You can create tools for any functionality:
```python
from langchain.tools import BaseTool

class WeatherTool(BaseTool):
    # BaseTool is a Pydantic model, so fields need type annotations
    name: str = "weather_lookup"
    description: str = "Get current weather for a city"

    def _run(self, city: str) -> str:
        # Integration with a real weather API would go here
        return f"Weather in {city}: 72°F, Sunny"

    async def _arun(self, city: str):
        raise NotImplementedError("Async not implemented")

# Add to tools
tools.append(WeatherTool())
```
Agent Types Explained
| Agent Type | Best For | Description |
|---|---|---|
| ZERO_SHOT_REACT_DESCRIPTION | Simple tasks | One-shot ReAct reasoning |
| CONVERSATIONAL_REACT_DESCRIPTION | Chat interfaces | Maintains conversation history |
| STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION | Complex outputs | Handles multi-input, structured tool calls |
| OPENAI_FUNCTIONS | OpenAI models | Uses native function calling |
| Plan-and-execute (in langchain_experimental) | Multi-step tasks | Plans before acting |
Deployment Options
1. CLI Application
```python
# interactive_agent.py
while True:
    query = input("\nAsk me anything (or 'quit'): ")
    if query.lower() == 'quit':
        break
    response = agent.run(query)
    print(f"\nAgent: {response}")
```
2. FastAPI Web Service
```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

@app.post("/ask")
async def ask_agent(query: Query):
    response = agent.run(query.question)
    return {"answer": response}

# Run: uvicorn main:app --reload
```
3. Streamlit UI
```python
import streamlit as st

st.title("AI Research Assistant")
query = st.text_input("Ask a question:")

if query:
    with st.spinner("Researching..."):
        response = agent.run(query)
        st.write(response)
```
Error Handling and Robustness
```python
def safe_agent_run(agent, query, max_retries=3):
    for attempt in range(max_retries):
        try:
            return agent.run(query)
        except Exception as e:
            if attempt == max_retries - 1:
                return f"Error: {str(e)}"
            print(f"Attempt {attempt + 1} failed, retrying...")
```
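You can verify this retry pattern without spending API calls by testing it against a stub agent. The `FlakyAgent` below is a hypothetical stand-in that fails a set number of times before succeeding:

```python
class FlakyAgent:
    """Stub that raises on the first `fail_times` calls, then succeeds."""

    def __init__(self, fail_times: int):
        self.fails_left = fail_times

    def run(self, query: str) -> str:
        if self.fails_left > 0:
            self.fails_left -= 1
            raise RuntimeError("transient API error")
        return f"answer to: {query}"

def safe_run(agent, query, max_retries=3):
    for attempt in range(max_retries):
        try:
            return agent.run(query)
        except Exception as e:
            if attempt == max_retries - 1:
                return f"Error: {e}"

print(safe_run(FlakyAgent(fail_times=2), "ping"))  # answer to: ping
print(safe_run(FlakyAgent(fail_times=5), "ping"))  # Error: transient API error
```

Two failures fall within the three-attempt budget, so the first call recovers; five failures exhaust it, so the second call returns the error string instead of raising.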
Monitoring and Logging
```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Add callbacks
from langchain.callbacks import StdOutCallbackHandler

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=[StdOutCallbackHandler()],
    verbose=True
)
```
Common Issues and Solutions
Issue: Agent loops infinitely
Solution: Set the max_iterations parameter
```python
agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    max_iterations=5,
    early_stopping_method="generate"
)
```
Issue: Agent selects wrong tool
Solution: Improve tool descriptions
```python
tool = Tool(
    name="specific_name",
    func=function,
    description="Very specific description of when to use this tool"
)
```
Issue: API rate limits
Solution: Add retry logic with exponential backoff
```python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
def call_agent(query):
    return agent.run(query)
```
Next Steps
After mastering the basics:
- Add more tools: Database connections, API integrations
- Implement RAG: Connect to your knowledge base
- Multi-agent systems: Specialized agents working together
- Production deployment: Docker, Kubernetes, monitoring
Resources
Want to learn more? Check out our AI development guides and tools directory.