
LangGraph Tutorial

Introduction

LangGraph is built for developers who want to create powerful, adaptable AI agents.

Features

  • Reliability and controllability. Steer agent actions with moderation checks and human-in-the-loop reviews. LangGraph persists context for long-running workflows, keeping your agents on track.
  • Low-level and extensible. Build custom agents from fully descriptive, low-level primitives, free of rigid abstractions that limit customization. Design scalable multi-agent systems in which each agent plays a specific role tailored to your use case.
  • First-class streaming support. With token-by-token streaming and streaming of intermediate steps, LangGraph gives users clear, real-time visibility into an agent's reasoning and actions.

Environment Setup

See the 'Environment Setup' section of the LangChain tutorial article.

Basic Tutorial

Building a Basic Chatbot

Let's start from the very beginning and use LangGraph to build a chatbot that supports basic conversation.

Importing the LLM

from langchain_deepseek import ChatDeepSeek

from app.config.config import settings

# Initialize the DeepSeek model
chat_model = ChatDeepSeek(
    model=settings.DEEPSEEK_MODEL,  # Available models: deepseek-chat (conversation) or deepseek-reasoner (reasoning)
    api_key=settings.DEEPSEEK_API_KEY,
    temperature=0.5,  # Controls output randomness (0-1; higher is more creative)
    max_tokens=512
)

We use DeepSeek as our LLM.

Creating a StateGraph (state machine)

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

As you can see, we created a class State that inherits from TypedDict and defines an attribute whose key is messages and whose type is Annotated.

Let's briefly introduce TypedDict and Annotated.

TypedDict lets you declare which keys a dictionary should have and the type of each value,
providing static type information for type checkers.
Here it defines a State type: it must be a dictionary containing a messages key,
and the value of messages must be of type list.

Annotated is a special construct in Python's typing system for attaching extra metadata to a type annotation.
In the code above, it attaches a LangGraph-specific behavior marker to the messages field.
Its form is Annotated[<type>, <metadata1>, <metadata2>, ...]
Here, Annotated attaches the runtime metadata add_messages (a LangGraph-specific marker).

So State is essentially a map-like structure containing one key-value pair whose key is 'messages' and whose value type is Annotated. Annotated[list, add_messages] declares that the value of 'messages' is a list, and that when this key is updated the messages list is not overwritten directly; instead, the add_messages function appends the new messages to the existing list.
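To make this concrete, here is a stdlib-only sketch (no LangGraph required; the add_messages below is our own simplified stand-in, not the real reducer) showing how a framework can recover the reducer from the Annotated metadata and use it to append rather than overwrite:

```python
from typing import Annotated, TypedDict, get_type_hints

def add_messages(existing: list, new: list) -> list:
    """Simplified stand-in for LangGraph's add_messages reducer: append, don't overwrite."""
    return existing + new

class State(TypedDict):
    messages: Annotated[list, add_messages]

# A framework can read the reducer back out of the annotation metadata:
hints = get_type_hints(State, include_extras=True)
reducer = hints["messages"].__metadata__[0]

old_state = {"messages": ["hi"]}
update = {"messages": ["there"]}
new_messages = reducer(old_state["messages"], update["messages"])
print(new_messages)  # ['hi', 'there']: appended, not replaced
```

This only illustrates the mechanism; LangGraph's real add_messages additionally handles message IDs (e.g., replacing a message that carries the same ID).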

Adding a node

def chatbot(state: State):
    # Get the conversation history
    messages = state["messages"]
    # Call chat_model to generate a reply
    chat_model_res = chat_model.invoke(messages)
    # Append the returned message back into the State object
    return {"messages": [chat_model_res]}

graph_builder.add_node("chatbot", chatbot)

Note how the chatbot node function takes the current State as input and returns a dictionary containing an updated list under the "messages" key. This is the basic pattern for all LangGraph node functions.

Adding an entry point

Add an entry point to tell the graph where to start each time it runs:

graph_builder.add_edge(START, "chatbot")
Adding an exit point

Add an exit point to indicate where the graph should finish execution:

graph_builder.add_edge("chatbot", END)
Compiling the graph

Before running the graph, we need to compile it:

graph = graph_builder.compile()
Running the chatbot

We need to write something like Java's main method to run the program:

def stream_graph_updates(user_input: str):
    message = {"messages": [{"role": "user", "content": user_input}]}

    for event in graph.stream(message):
        for node, output in event.items():
            if output and "messages" in output:
                print("Assistant:", output["messages"][-1].content)


while True:
    try:
        user_input = input("User : ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("good bye")
            break
        stream_graph_updates(user_input)
    except Exception as e:
        # print(f"Error is : {e}")
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break

Run the program and you can chat with the bot in the console.


Adding Tools to the Chatbot

We implemented a basic chatbot above; now let's enhance the conversation. For the concept of conversation enhancement, see the article Langchain4j实现智能助手, section '对话功能增强'. We start by adding the Tavily search engine tool, tavily_search.

Logic walkthrough

Previously the graph's flow was a straight line: from the START point into the chatbot node, where the LLM is called for a reply, and then on to the END point:

START → chatbot → END

With tools added, the graph's flow becomes: 1. take the input message and build the state object; 2. enter the first node, 'chatbot'; 3. if the messages contain tool calls, go to the 'tools' node; if not, finish directly (END); 4. after processing in 'tools', return to the 'chatbot' node; 5. finish after 'chatbot' processes again (at this point the message is the one returned from tools and contains no tool calls).

The flow now looks like this:

START → chatbot → (tool calls? tools : END)
            ↑________________|
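The five steps above can be sketched as an ordinary Python loop. This is a toy simulation of the control flow only; the chatbot and tools functions below are hypothetical stand-ins, not LangGraph API:

```python
def chatbot(messages):
    # Hypothetical LLM node: a user turn "requests" a tool call,
    # any later turn produces a final answer with no tool calls.
    last = messages[-1]
    if last["role"] == "user":
        return {"role": "ai", "content": "", "tool_calls": [{"name": "search", "args": {"q": last["content"]}}]}
    return {"role": "ai", "content": "final answer", "tool_calls": []}

def tools(ai_message):
    # Hypothetical tool node: execute each requested call, return tool messages.
    return [{"role": "tool", "content": f"result for {c['args']['q']}"} for c in ai_message["tool_calls"]]

messages = [{"role": "user", "content": "what's the weather?"}]  # 1. build state from input
while True:                          # START -> chatbot (2.)
    ai = chatbot(messages)
    messages.append(ai)
    if ai["tool_calls"]:             # 3. conditional edge: tool calls present?
        messages.extend(tools(ai))   # 4. tools -> back to chatbot
    else:
        break                        # 5. END
print([m["role"] for m in messages])  # ['user', 'ai', 'tool', 'ai']
```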

Adding the tool

Sign up on the Tavily website, request an api_key, and save it in the .env configuration file:

TAVILY_API_KEY="tvly-dev-xxx"

Importing the tool object

from dotenv import load_dotenv
from langchain_community.tools.tavily_search import TavilySearchResults

# Read the '.env' configuration file
load_dotenv()
# Set up Tavily search (register at https://app.tavily.com/home, request an API key, and add it to the .env file)
tavily_search = TavilySearchResults(max_results=1)
# Build the tools list
tools = [tavily_search]

Binding the tools to the LLM

chat_model_with_tools = chat_model.bind_tools(tools)
Defining the graph

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
Adding the node

def chatbot(state: State):
    # Get the conversation history
    messages = state["messages"]
    # Call chat_model to generate a reply
    chat_model_res = chat_model_with_tools.invoke(messages)
    # Append the returned message back into the State object
    return {"messages": [chat_model_res]}


graph_builder.add_node("chatbot", chatbot)
Creating a node to run the tools

Now create a function that runs the tools when they are called. We do this by adding the tools to a new node called BasicToolNode, which inspects the most recent message in the state and invokes tools whenever the message contains tool_calls. It relies on the LLM's tool-calling support. The code is as follows:

import json

from langchain_core.messages import ToolMessage


class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}


tool_node = BasicToolNode(tools=tools)

graph_builder.add_node("tools", tool_node)
A few Python language basics

This code is a bit more involved; don't worry, we'll go through it piece by piece and introduce a few Python language basics along the way.

  • Dictionary comprehension, which builds a dictionary by iterating, here converting a list into a dict:
'''
- {key: value for item in iterable} is the standard form of a dictionary comprehension
- Here, tool.name is used as the dictionary key
- The tool object itself is used as the dictionary value
- for tool in tools iterates over the input tools list
'''
def __init__(self, tools: list) -> None:
    self.tools_by_name = {tool.name: tool for tool in tools}
  • The __call__ method

__call__ is a special ("magic"/dunder) method in Python that lets an instance of a class be called like a function. It is the key mechanism behind Python's callable objects: writing 'instance()' automatically triggers it.

class MyClass:
    def __call__(self, *args, **kwargs):
        # Define the call behavior
        return ...

obj = MyClass()
obj()  # This invokes obj.__call__()
  • := the walrus operator

Purpose: assignment plus a return value.
:= assigns to a variable and then returns the assigned value, so it can be used directly inside expressions (such as if and while).

n = len([1, 2, 3])
if n > 2:
    print(f"The list length is {n}")

# ================ equivalent to ================
if (n := len([1, 2, 3])) > 2:
    print(f"The list length is {n}")
  • The dict get(key, default) method

inputs.get("messages", [])

dict.get(key, default) is a standard dictionary method: the first argument, "messages", is the key to look up, and the second argument, [], is the default value returned when the key is missing.

Code walkthrough
  1. The class BasicToolNode defines an __init__ method that receives the tools list and saves it as a dictionary (i.e., a map).

  2. The class also defines a __call__ method that invokes the tools.

The overall logic of __call__ is: (1) extract the list under the key messages from the input dict and take the most recent message object from it; (2) set up the return value outputs; (3) iterate over the message's tool calls (tool_calls); whenever a call's name matches a tool saved in __init__, invoke that tool to produce a tool_result and append the result to outputs.

  3. Instantiate BasicToolNode to create the object tool_node.
  4. Add 'tool_node' to the graph builder (graph_builder).
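The heart of BasicToolNode, dispatch-by-name, can be reduced to a stdlib-only toy. The two plain functions below are hypothetical stand-ins for LangChain tool objects (which is also why we key on __name__ rather than tool.name):

```python
def search(args):
    return f"searched: {args['query']}"

def calculator(args):
    return args["a"] + args["b"]

# Same dict-comprehension trick as in __init__, keyed by name.
tools_by_name = {fn.__name__: fn for fn in (search, calculator)}

# Hypothetical tool_calls as they might appear on an AI message.
tool_calls = [
    {"name": "search", "args": {"query": "LangGraph"}},
    {"name": "calculator", "args": {"a": 2, "b": 3}},
]

# Look each requested call up by name and invoke it, as __call__ does.
outputs = [tools_by_name[c["name"]](c["args"]) for c in tool_calls]
print(outputs)  # ['searched: LangGraph', 5]
```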
Using LangGraph's built-in tool node

LangGraph already provides ToolNode; from now on we can use it directly. For example:

tool_node = ToolNode(tools)
graph_builder.add_node("tools", tool_node)
Defining conditional edges (conditional_edges)

Conditional edges route control flow from one node to the next. A conditional edge starts from a single node and usually contains an "if" statement that routes to different nodes depending on the current graph state. These functions receive the current graph state and return a string or a list of strings indicating which node(s) to call next.

Next, define a router function named route_tools that checks the chatbot's output for tool_calls. Register it with the graph via add_conditional_edges; this tells the graph that whenever the chatbot node finishes, it should consult this function to decide where to go next.

If tool calls are present, the condition routes to tools; otherwise it routes to END. Because the condition can return END, you do not need to set a finish_point explicitly. The code is as follows:

# Define the conditional edge
def route_tools(state: State):
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END


graph_builder.add_conditional_edges(
    # Source node name: the starting point of the condition check
    "chatbot",
    # Condition function; it should return a key (such as "tools" or END)
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
    # It defaults to the identity function, but if you
    # want to use a node named something else apart from "tools",
    # You can update the value of the dictionary to something else
    # e.g., "tools": "my_tools"
    # Route mapping dictionary
    {"tools": "tools", END: END}
)

As you can see, the route_tools function extracts the most recent message from the argument it receives and checks whether the message contains tool calls (tool_calls); if so, it returns 'tools'.

We then add a conditional edge to the graph builder. add_conditional_edges takes three arguments, 'source', 'path', and 'path_map': the starting node, the path function, and the path mapping dictionary. Starting from the 'chatbot' node, the 'route_tools' function decides the route: if the message contains tool calls, route to 'tools'; otherwise route directly to END.

Using LangGraph's built-in conditional edge

LangGraph already provides tools_condition; from now on we can use it directly. For example:

graph_builder.add_conditional_edges("chatbot", tools_condition)
Adding plain edges and compiling the graph

We continue by adding the plain edges 'tools'→'chatbot' and START→'chatbot', and then compile the graph.

# Return path after tool execution, ensuring tool results flow back to the chatbot node
# After fetching external information via a tool, we hand it back to the LLM
# so that it can turn the information into a natural-language answer
graph_builder.add_edge("tools", "chatbot")
# The workflow's entry point, ensuring the conversation always starts at the chatbot node
graph_builder.add_edge(START, "chatbot")

graph = graph_builder.compile()
Running the chatbot

The code is the same as in the earlier 'Running the chatbot' section, so we won't repeat it here. Running the program, you can see that it calls the external tool for information and then processes the result into a natural-language answer, achieving conversation enhancement.

Adding Memory to the Chatbot

Building the chatbot

Following the sections above, build a chatbot (using LangGraph's ToolNode and tools_condition). The code is as follows:

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = chat_model.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
Creating a memory checkpoint and compiling the graph

# Create a MemorySaver checkpointer
memory = MemorySaver()
# Compile the graph
graph = graph_builder.compile(checkpointer=memory)
Running the chatbot

config = {"configurable": {"thread_id": "1"}}

input1 = "Hi there! My name is Will."
message1 = {"messages": [{"role": "user", "content": input1}]}

events1 = graph.stream(message1, config)
for event in events1:
    if "messages" in event and event["messages"]:  # Check the key exists and is non-empty
        print(event["messages"][-1])

input2 = "Remember my name?"
message2 = {"messages": [{"role": "user", "content": input2}]}

events2 = graph.stream(message2, config)
for event in events2:
    if "messages" in event and event["messages"]:  # Check the key exists and is non-empty
        print(event["messages"][-1])

Looking at the returned chat messages, we can see that the program has remembered the context of the conversation.
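Why the second turn remembers the first: conceptually, the checkpointer persists the accumulated state under each thread_id and restores it before every run. A minimal stdlib sketch of that idea (not how MemorySaver is actually implemented):

```python
class ToyCheckpointer:
    """Keeps one saved message list per thread_id, in memory."""

    def __init__(self):
        self._store = {}

    def load(self, thread_id):
        return list(self._store.get(thread_id, []))

    def save(self, thread_id, messages):
        self._store[thread_id] = list(messages)

def run_turn(cp, thread_id, user_text):
    messages = cp.load(thread_id)                                     # restore prior state
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "ai", "content": f"echo: {user_text}"})  # fake LLM reply
    cp.save(thread_id, messages)                                      # persist updated state
    return messages

cp = ToyCheckpointer()
run_turn(cp, "1", "Hi there! My name is Will.")
history = run_turn(cp, "1", "Remember my name?")
print(len(history))                      # 4: both turns live on thread "1"
print(len(run_turn(cp, "2", "hello")))   # 2: a different thread_id starts fresh
```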

Inspecting state

So far, we have made checkpoints across two different threads. But what does a checkpoint contain? To inspect a graph's state for a given config at any time, call get_state(config).

snapshot = graph.get_state(config)
snapshot

The printed output:
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='8c1ca919-c553-4ebf-95d4-b59a2d61e078'), AIMessage(content="Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?", additional_kwargs={}, response_metadata={'id': 'msg_01WTQebPhNwmMrmmWojJ9KXJ', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 405, 'output_tokens': 32}}, id='run-58587b77-8c82-41e6-8a90-d62c444a261d-0', usage_metadata={'input_tokens': 405, 'output_tokens': 32, 'total_tokens': 437}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='daba7df6-ad75-4d6b-8057-745881cea1ca'), AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-93e0-6acc-8004-f2ac846575d2'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? 
I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}}, 'step': 4, 'parents': {}}, created_at='2024-09-27T19:30:10.820758+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-859f-6206-8003-e1bd3c264b8f'}}, tasks=())
snapshot.next  # (since the graph ended this turn, `next` is empty. If you fetch a state from within a graph invocation, next tells which node will execute next)

The snapshot above contains the current state values, the corresponding config, and the next node to process. In our case the graph has reached the END state, so next is empty.

Adding Human-in-the-Loop Control

The agents we build can be unreliable and may need human input to complete tasks successfully. Likewise, some actions may require human approval before they run, to make sure everything goes as intended.

LangGraph's persistence layer supports human-in-the-loop workflows, allowing execution to pause and resume based on user feedback. The main interface to this capability is the interrupt function. Calling interrupt inside a node pauses execution; passing in a Command resumes it, carrying new input from the human.
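The pause-and-resume mechanics can be mimicked with a plain Python generator: yield plays the role of interrupt (surfacing a payload and pausing), and send plays the role of the resume Command. This is a conceptual sketch only; LangGraph actually implements interrupts via the checkpointer, not generators:

```python
def node_with_interrupt(query):
    # `yield` pauses here and surfaces the payload, like interrupt({"query": ...}).
    human_reply = yield {"query": query}
    # Execution continues from this point once a value is sent back in.
    return f"assistant used human input: {human_reply}"

gen = node_with_interrupt("How should I build my agent?")
payload = next(gen)             # run until the "interrupt"; payload reaches the caller
print(payload)                  # {'query': 'How should I build my agent?'}

result = None
try:
    gen.send("Use LangGraph.")  # the human answers; execution resumes
except StopIteration as done:
    result = done.value         # the node's final return value
print(result)
```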

Building the human-in-the-loop tool

Continuing from the code in the previous step, we add a method that implements interrupt, put it into the tools list as a tool, and bind it to the LLM.

# Define the interrupt function
@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    # human_response = interrupt({"query": query})
    # return human_response["data"]
    # 1. First, trigger the interrupt
    interrupt({"query": query})

    # 2. Get the human input
    print(f"\n[System] Human assistance needed: {query}\n")
    human_response = input("Please enter your reply: ")

    # 3. Return the human response
    return human_response

tools = [tavily_search, human_assistance]

# Bind the tools to the LLM
chat_model_with_tools = chat_model.bind_tools(tools)
Adding the runner code

We run the graph twice in total: the first run executes normally, and the second uses Command to supply the human input and resume.

# Run the program
user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

# Resume execution
human_response = (
    "We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
    " It's much more reliable and extensible than simple autonomous agents."
)

human_command = Command(resume={"data": human_response})

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

The full code is as follows:

# Set up the search tool
load_dotenv()
tavily_search = TavilySearchResults(max_results=1)


# Define the interrupt function
@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    # human_response = interrupt({"query": query})
    # return human_response["data"]
    # 1. First, trigger the interrupt
    interrupt({"query": query})

    # 2. Get the human input
    print(f"\n[System] Human assistance needed: {query}\n")
    human_response = input("Please enter your reply: ")

    # 3. Return the human response
    return human_response


tools = [tavily_search, human_assistance]

# Bind the tools to the LLM
chat_model_with_tools = chat_model.bind_tools(tools)


# Build the state class
class State(TypedDict):
    messages: Annotated[list, add_messages]


# Build the graph builder
graph_builder = StateGraph(State)


# Define the chatbot node and add it to the graph
def chatbot(state: State):
    # Get the conversation history
    messages = state["messages"]
    # Call the LLM
    response = chat_model_with_tools.invoke(messages)
    # Expect at most one tool call per turn
    assert len(response.tool_calls) <= 1
    # Append the response to the state object
    res = {"messages": [response]}

    return res


graph_builder.add_node("chatbot", chatbot)

# Define the tool node and add it to the graph
tool_node = ToolNode(tools)
graph_builder.add_node("tools", tool_node)

# Add the conditional edge
graph_builder.add_conditional_edges("chatbot", tools_condition)

# Add the plain edge
graph_builder.add_edge("tools", "chatbot")

# Add the entry point
graph_builder.set_entry_point("chatbot")

# Create the MemorySaver checkpointer
memory = MemorySaver()

# Compile the graph
graph = graph_builder.compile(checkpointer=memory)

# Run the program
user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

# Resume execution
human_response = (
    "We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
    " It's much more reliable and extensible than simple autonomous agents."
)

human_command = Command(resume={"data": human_response})

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

Now run the program to test it. You can see that after the interrupt, the program adjusts its answer based on our input.

Use Cases

If the usefulness of the earlier topics was easy to see, then after learning about human-in-the-loop control many people may still be puzzled: "what is this feature actually for?" In fact, we already used this capability in the earlier article Langchain4j实现智能助手, sections '实现智能助手' → '对话功能增强' → '票务助手'. Concretely: when the information the user initially provides is incomplete, the program needs to take the initiative, interacting with the user repeatedly to correct its course or gather missing material so that the task is completed correctly and accurately.

Custom State

In this tutorial we add additional fields to the state to define complex behavior without relying solely on the message list. The chatbot will use its search tool to find specific information and forward it to a human for review.

Adding keys to the state machine

Building on the previous step's code, we add two attributes to the state:

class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str

Adding this information to the state makes it easy for other graph nodes (such as downstream nodes that store or process the information), as well as the graph's persistence layer, to access it.
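The update semantics per key can be sketched as follows: messages has a reducer (append via add_messages), while plain keys such as name and birthday are simply overwritten. A stdlib toy of that merge rule (illustrative only; add_messages and the REDUCERS table below are our own simplified stand-ins, not LangGraph internals):

```python
def add_messages(existing, new):
    # Simplified stand-in for LangGraph's add_messages reducer.
    return existing + new

# Hypothetical per-key reducers: messages appends; keys without a reducer overwrite.
REDUCERS = {"messages": add_messages}

def apply_update(state, update):
    merged = dict(state)
    for key, value in update.items():
        reducer = REDUCERS.get(key)
        merged[key] = reducer(merged.get(key, []), value) if reducer else value
    return merged

state = {"messages": [{"role": "user", "content": "hi"}], "name": "", "birthday": ""}
state = apply_update(state, {
    "messages": [{"role": "ai", "content": "hello"}],
    "name": "LangGraph",
    "birthday": "Jan 17, 2024",
})
print(len(state["messages"]), state["name"])  # 2 LangGraph
```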

Building a tool that updates state from inside the tool

# Core flow:
# First, interrupt() pauses the current run and sends a verification request to the human
# (containing the question and the data to verify).
# Then we judge correctness from the human response:
#   if the response starts with "y" (confirmation), keep the original data;
#   otherwise, use the new data provided by the human.
# Build the state-update object containing the verified data and a tool message,
# and return a Command object telling the framework to update the state.
@tool
def human_assistance(name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]) -> str:
    """Request assistance from a human."""
    human_response = interrupt({"question": "Is this correct?", "name": name, "birthday": birthday})
    # If the information is correct, update the state as-is.
    if human_response.get("correct", "").lower().startswith("y"):
        verified_name = name
        verified_birthday = birthday
        response = "Correct"
    # Otherwise, receive information from the human reviewer.
    else:
        verified_name = human_response.get("name", name)
        verified_birthday = human_response.get("birthday", birthday)
        response = f"Made a correction: {human_response}"
    # This time we explicitly update the state with a ToolMessage inside
    # the tool.
    state_update = {
        "name": verified_name,
        "birthday": verified_birthday,
        "messages": [ToolMessage(response, tool_call_id=tool_call_id)]
    }
    # We return a Command object in the tool to update our state.
    return Command(update=state_update)
Running the chatbot

The user prompt makes two requests: 1. look up LangGraph's release date; 2. once the answer is found, call the human_assistance tool we wrote in the previous step for review.

user_input = (
    "Can you look up when LangGraph was released? "
    "When you have the answer, use the human_assistance tool for review."
)
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

We run the program to test it:

================================ Human Message =================================

Can you look up when LangGraph was released? When you have the answer, use the human_assistance tool for review.
================================== Ai Message ==================================

[{'text': "Certainly! I'll start by searching for information about LangGraph's release date using the Tavily search function. Then, I'll use the human_assistance tool for review.", 'type': 'text'}, {'id': 'toolu_01JoXQPgTVJXiuma8xMVwqAi', 'input': {'query': 'LangGraph release date'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01JoXQPgTVJXiuma8xMVwqAi)
Call ID: toolu_01JoXQPgTVJXiuma8xMVwqAi
Args:
query: LangGraph release date
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://blog.langchain.dev/langgraph-cloud/", "content": "We also have a new stable release of LangGraph. By LangChain 6 min read Jun 27, 2024 (Oct '24) Edit: Since the launch of LangGraph Platform, we now have multiple deployment options alongside LangGraph Studio - which now fall under LangGraph Platform. LangGraph Platform is synonymous with our Cloud SaaS deployment option."}, {"url": "https://changelog.langchain.com/announcements/langgraph-cloud-deploy-at-scale-monitor-carefully-iterate-boldly", "content": "LangChain - Changelog | ☁ 🚀 LangGraph Platform: Deploy at scale, monitor LangChain LangSmith LangGraph LangChain LangSmith LangGraph LangChain LangSmith LangGraph LangChain Changelog Sign up for our newsletter to stay up to date DATE: The LangChain Team LangGraph LangGraph Platform ☁ 🚀 LangGraph Platform: Deploy at scale, monitor carefully, iterate boldly DATE: June 27, 2024 AUTHOR: The LangChain Team LangGraph Platform is now in closed beta, offering scalable, fault-tolerant deployment for LangGraph agents. LangGraph Platform also includes a new playground-like studio for debugging agent failure modes and quick iteration: Join the waitlist today for LangGraph Platform. And to learn more, read our blog post announcement or check out our docs. Subscribe By clicking subscribe, you accept our privacy policy and terms and conditions."}]
================================== Ai Message ==================================

[{'text': "Based on the search results, it appears that LangGraph was already in existence before June 27, 2024, when LangGraph Platform was announced. However, the search results don't provide a specific release date for the original LangGraph. \n\nGiven this information, I'll use the human_assistance tool to review and potentially provide more accurate information about LangGraph's initial release date.", 'type': 'text'}, {'id': 'toolu_01JDQAV7nPqMkHHhNs3j3XoN', 'input': {'name': 'Assistant', 'birthday': '2023-01-01'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_01JDQAV7nPqMkHHhNs3j3XoN)
Call ID: toolu_01JDQAV7nPqMkHHhNs3j3XoN
Args:
name: Assistant
birthday: 2023-01-01

As you can see, the program called the Tavily search tool but did not find a definitive answer. Now we need to give it the correct result during the interrupt.

human_command = Command(
    resume={
        "name": "LangGraph",
        "birthday": "Jan 17, 2024",
    },
)

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

Run the program, and you can see that the reply now uses the data we provided.

Time Travel

In a typical chatbot workflow, the user interacts with the bot one or more times to accomplish a task. Memory and human-in-the-loop mechanisms enable checkpoints in the graph state and control future responses.

What if you want users to be able to start from a previous response and explore a different outcome? Or to "rewind" the chatbot's work to fix mistakes or try a different strategy (common in applications such as autonomous software engineers)?

You can create these kinds of experiences with LangGraph's built-in time travel functionality.

Building the chatbot

As before, we start by copying the chatbot code:

class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

tool = TavilySearchResults(max_results=2)
tools = [tool]
llm_with_tools = chat_model.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
Adding steps

Add the code that runs the chatbot:

config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "I'm learning LangGraph. "
                    "Could you do some research on it for me?"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "Ya that's helpful. Maybe I'll "
                    "build an autonomous agent with it!"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
Replaying the full state history

We can use replay to see everything that happened, i.e., the full state history:

to_replay = None
for state in graph.get_state_history(config):
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    print("-" * 80)
    if len(state.values["messages"]) == 6:
        # We are somewhat arbitrarily selecting a specific state based on the number of chat messages in the state.
        to_replay = state
Num Messages:  8 Next:  ()
--------------------------------------------------------------------------------
Num Messages: 7 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 6 Next: ('tools',)
--------------------------------------------------------------------------------
Num Messages: 5 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 4 Next: ('__start__',)
--------------------------------------------------------------------------------
Num Messages: 4 Next: ()
--------------------------------------------------------------------------------
Num Messages: 3 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 2 Next: ('tools',)
--------------------------------------------------------------------------------
Num Messages: 1 Next: ('chatbot',)
--------------------------------------------------------------------------------
Num Messages: 0 Next: ('__start__',)
--------------------------------------------------------------------------------
Resuming from a checkpoint

to_replay resumes from the state just after the chatbot node in the second graph invocation; resuming from this point will invoke the action (tools) node next. Printing to_replay.next and to_replay.config gives:

('tools',)
{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1efd43e3-0c1f-6c4e-8006-891877d65740'}}
Loading state from a moment in time

to_replay.config contains a checkpoint_id timestamp. Providing this checkpoint_id value tells LangGraph's checkpointer to load the state from that moment.
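Loading by checkpoint_id can be pictured as indexing into a saved history: every step's state snapshot is stored under an id, and resuming means continuing from that snapshot instead of the latest one. A toy model of the idea (not the checkpointer API):

```python
import uuid

history = []  # (checkpoint_id, snapshot) pairs in execution order
state = {"messages": []}
for step in ["user: hi", "ai: hello", "user: tell me more", "ai: sure"]:
    state = {"messages": state["messages"] + [step]}  # immutable-style update
    history.append((str(uuid.uuid4()), state))        # checkpoint each step

# Pick an earlier checkpoint (here: the state right after "ai: hello")...
checkpoint_id, snapshot = history[1]

# ...and "resume" from it, forking off an alternative continuation.
resumed = {"messages": snapshot["messages"] + ["ai: alternative continuation"]}
print(resumed["messages"])
```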

# The `checkpoint_id` in the `to_replay.config` corresponds to a state we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()

Run the program:

================================== Ai Message ==================================

[{'text': "That's an exciting idea! Building an autonomous agent with LangGraph is indeed a great application of this technology. LangGraph is particularly well-suited for creating complex, multi-step AI workflows, which is perfect for autonomous agents. Let me gather some more specific information about using LangGraph for building autonomous agents.", 'type': 'text'}, {'id': 'toolu_01QWNHhUaeeWcGXvA4eHT7Zo', 'input': {'query': 'Building autonomous agents with LangGraph examples and tutorials'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
tavily_search_results_json (toolu_01QWNHhUaeeWcGXvA4eHT7Zo)
Call ID: toolu_01QWNHhUaeeWcGXvA4eHT7Zo
Args:
query: Building autonomous agents with LangGraph examples and tutorials
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://towardsdatascience.com/building-autonomous-multi-tool-agents-with-gemini-2-0-and-langgraph-ad3d7bd5e79d", "content": "Building Autonomous Multi-Tool Agents with Gemini 2.0 and LangGraph | by Youness Mansar | Jan, 2025 | Towards Data Science Building Autonomous Multi-Tool Agents with Gemini 2.0 and LangGraph A practical tutorial with full code examples for building and running multi-tool agents Towards Data Science LLMs are remarkable — they can memorize vast amounts of information, answer general knowledge questions, write code, generate stories, and even fix your grammar. In this tutorial, we are going to build a simple LLM agent that is equipped with four tools that it can use to answer a user’s question. This Agent will have the following specifications: Follow Published in Towards Data Science --------------------------------- Your home for data science and AI. Follow Follow Follow"}, {"url": "https://github.com/anmolaman20/Tools_and_Agents", "content": "GitHub - anmolaman20/Tools_and_Agents: This repository provides resources for building AI agents using Langchain and Langgraph. This repository provides resources for building AI agents using Langchain and Langgraph. This repository provides resources for building AI agents using Langchain and Langgraph. This repository serves as a comprehensive guide for building AI-powered agents using Langchain and Langgraph. It provides hands-on examples, practical tutorials, and resources for developers and AI enthusiasts to master building intelligent systems and workflows. AI Agent Development: Gain insights into creating intelligent systems that think, reason, and adapt in real time. This repository is ideal for AI practitioners, developers exploring language models, or anyone interested in building intelligent systems. This repository provides resources for building AI agents using Langchain and Langgraph."}]
================================== Ai Message ==================================

Great idea! Building an autonomous agent with LangGraph is indeed an excellent way to apply and deepen your understanding of the technology. Based on the search results, I can provide you with some insights and resources to help you get started:

1. Multi-Tool Agents:
LangGraph is well-suited for building autonomous agents that can use multiple tools. This allows your agent to have a variety of capabilities and choose the appropriate tool based on the task at hand.

2. Integration with Large Language Models (LLMs):
There's a tutorial that specifically mentions using Gemini 2.0 (Google's LLM) with LangGraph to build autonomous agents. This suggests that LangGraph can be integrated with various LLMs, giving you flexibility in choosing the language model that best fits your needs.

3. Practical Tutorials:
There are tutorials available that provide full code examples for building and running multi-tool agents. These can be invaluable as you start your project, giving you a concrete starting point and demonstrating best practices.
...

Remember, building an autonomous agent is an iterative process. Start simple and gradually increase complexity as you become more comfortable with LangGraph and its capabilities.

Would you like more information on any specific aspect of building your autonomous agent with LangGraph?

The graph resumed execution from the action (tools) node. You can tell this is the case because the first value printed above is the response from our search engine tool.

Congratulations! You have now used time-travel checkpoint traversal in LangGraph. Being able to rewind and explore alternative paths opens up endless possibilities for debugging, experimentation, and interactive applications.