Exploring LangGraph for Agentic Workflows

Gunjan
2 min read · Jun 22, 2024


LangGraph for Agentic Workflows

LangGraph is an open-source library for building stateful, multi-actor applications with large language models (LLMs). It facilitates the creation of agent and multi-agent workflows by representing them as graphs. This approach brings several advantages, including support for loops and conditionals, controllability, and persistence, which are essential for complex AI agent architectures, sometimes referred to as Agentic Architectures.

Features of LangGraph

  1. Cycles and Branching — Allows the implementation of loops and conditionals within applications, differentiating it from Directed Acyclic Graph (DAG)-based solutions.
  2. Persistence — Automatically saves the state after each step in the graph, enabling the ability to pause and resume execution. This supports error recovery, human-in-the-loop workflows, and time travel.
  3. Human-in-the-Loop — Integrates human approval or intervention within the workflow, making it possible to interrupt and modify the actions planned by the agent.
  4. Streaming Support — Streams outputs as they are produced by each node, including token-level streaming.
  5. Integration with LangChain — Seamlessly integrates with LangChain and LangSmith, though it can also function independently.

Here is a quick example of how LangGraph can be used to create an agent that interacts with a search API:

pip install -U langgraph langchain-community langchain_openai tavily-python

from langgraph.graph import END, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults

# Define the tools for the agent to use
tools = [TavilySearchResults(max_results=1)]
tool_node = ToolNode(tools)
model = ChatOpenAI(temperature=0).bind_tools(tools)

# Function to determine the next step
def should_continue(state):
    messages = state['messages']
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

# Function to call the model
def call_model(state):
    messages = state['messages']
    response = model.invoke(messages)
    return {"messages": [response]}

# Define the workflow
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")

# Initialize memory to persist state between graph runs
checkpointer = MemorySaver()

# Compile the workflow
app = workflow.compile(checkpointer=checkpointer)

# Invoke the graph (a thread_id is required when a checkpointer is used)
final_state = app.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in SF?"}]},
    config={"configurable": {"thread_id": 42}},
)
print(final_state["messages"][-1].content)

The above agent can search the web and maintain state across interactions. The model and tools are defined, and the workflow is set up to handle conditional logic and persistent state.

References:
https://github.com/langchain-ai/langgraph
https://github.com/langchain-ai/langgraph-example
https://github.com/menloparklab/LangGraphJourney
