
A LangGraph-based Smart Q&A Workflow

2026-04-09 16:30:02


A LangGraph Workflow Project: From Zero to Running

Building a LangGraph + MiniMax smart Q&A project from scratch: multi-turn conversation, routing and dispatch, agent calls, with LangSmith tracing. I hit quite a few pitfalls along the way, so I'm writing them down for anyone who needs this.

Background

I wanted a single entry point: users ask whatever they like, and the system routes internally. General questions go to a general Q&A agent; code-related questions go to a code-handling agent. The key requirement: multi-turn conversation, so users can follow up, paste more code, and dig deeper.

The solution I ended up with: a router + agent workflow built with LangGraph, backed by the MiniMax API, with LangSmith for call-chain tracing, and an add_messages reducer in the State to accumulate conversation history automatically.



Setup

Initialize the project

mkdir demos && cd demos
uv init

Install dependencies

uv add langchain-openai langgraph python-dotenv
uv add langchain-core langchain-community

Final pyproject.toml:

[project]
name = "demos"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "langchain-openai>=1.1.12",
    "langgraph>=1.0.0",
    "python-dotenv>=1.2.2",
]

Configure API keys

Create a .env in the project root:

MINIMAX_API_KEY="your MiniMax API key"
SMITH_API_KEY="your LangSmith API key"

Directory layout

demos/
├── agents/              # agent definitions
│   ├── __init__.py
│   ├── code_agent.py    # code-handling agent
│   └── prompt_agent.py  # general Q&A agent
├── core/                # core utilities
│   ├── __init__.py
│   ├── llm.py           # LLM initialization and call wrapper
│   └── tracing.py       # LangSmith configuration
├── tools/               # tool set
│   ├── __init__.py
│   ├── math_tools.py    # math/calculation tools
│   └── search_tools.py  # search tools
├── workflow/            # workflow
│   ├── nodes/           # node implementations
│   ├── graph/           # graph definition
│   ├── routes/          # routing logic
│   ├── states/          # state definitions
│   └── simple_assistant/
└── run_workflow.py      # entry script

Core modules

LLM wrapper

core/llm.py centralizes model calls:

import os
from langchain_openai import ChatOpenAI

def build_llm() -> ChatOpenAI:
    api_key = os.getenv("MINIMAX_API_KEY")
    return ChatOpenAI(
        model="MiniMax-M2.7",
        base_url="https://api.minimaxi.com/v1",
        api_key=api_key,
        temperature=0.7,
        max_tokens=1000,
        timeout=60,
    )

LangSmith configuration

core/tracing.py handles call-chain tracing:

import os
from dotenv import load_dotenv

def configure_langsmith(project_name: str) -> str:
    load_dotenv()
    api_key = os.getenv("LANGSMITH_API_KEY") or os.getenv("SMITH_API_KEY")
    if api_key:
        os.environ["LANGSMITH_API_KEY"] = api_key
        os.environ["LANGSMITH_TRACING"] = "true"
    os.environ["LANGSMITH_PROJECT"] = project_name
    return project_name

def build_run_config(run_name: str, tags=None, metadata=None):
    return {
        "run_name": run_name,
        "tags": list(tags or []),
        "metadata": dict(metadata or {}),
    }

def extend_run_config(config, run_name=None, tags=None, metadata=None):
    from langchain_core.runnables.config import merge_configs
    extra = {
        "tags": list(tags or []),
        "metadata": dict(metadata or {}),
    }
    if run_name:
        extra["run_name"] = run_name
    return merge_configs(config, extra)
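extend_run_config leans on langchain_core's merge_configs, which combines tags and merges metadata across configs. A simplified, dependency-free stand-in for just those two keys (the real merge_configs handles callbacks and other config fields as well; merge_run_configs is a name invented here):

```python
def merge_run_configs(base: dict, extra: dict) -> dict:
    # Scalar keys (like run_name): the newer config wins.
    merged = {**base, **extra}
    # Tags concatenate; metadata dicts union, with `extra` winning on key clashes.
    merged["tags"] = list(base.get("tags", [])) + list(extra.get("tags", []))
    merged["metadata"] = {**base.get("metadata", {}), **extra.get("metadata", {})}
    return merged

workflow_cfg = {"run_name": "simple_assistant_run", "tags": ["workflow"], "metadata": {"workflow": "simple_assistant"}}
node_cfg = merge_run_configs(workflow_cfg, {"run_name": "prompt_agent_node", "tags": ["node"]})
```

This is why each node's span in LangSmith carries both the workflow-level tags and its own node tags.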

Agent implementations

General Q&A agent

The PromptAgent class in agents/prompt_agent.py exposes a reply() method that takes a question string and returns a dict. Its core capability is deciding from the user input whether a calculation or search tool call is needed.

Code-handling agent

The CodeAgent class in agents/code_agent.py exposes reply() and debug_reply() methods, handling plain code Q&A and code-debugging scenarios (with an error message attached), respectively.


Workflow design

State definition (the key to multi-turn conversation)

workflow/states/simple_assistant_state.py

from typing import Annotated, Literal, TypedDict
from langgraph.graph import add_messages

class SimpleAssistantState(TypedDict):
    messages: Annotated[list, add_messages]

    code: str
    error_message: str
    expected_behavior: str
    language: str

    intent: Literal["prompt", "code"]
    route_reason: str

    agent_name: str
    answer: str      # declared so node updates to these keys are valid
    thinking: str
    scenario: str
    tool_route: str

The important line is messages: Annotated[list, add_messages] — it makes new messages get appended to the list instead of overwriting it. add_messages is LangGraph's built-in reducer function, dedicated to merging message lists.

Routing logic

workflow/routes/simple_assistant_routes.py

from typing import Literal
from langchain_core.messages import BaseMessage
from workflow.states.simple_assistant_state import SimpleAssistantState

CODE_HINT_KEYWORDS = (
    "python", "java", "javascript", "代码", "函数", "类",
    "报错", "异常", "错误", "修复", "debug", "bug",
    "traceback", "stack trace", "review",
)

def _get_latest_user_message(messages: list[BaseMessage]) -> str:
    for msg in reversed(messages):
        if hasattr(msg, "type") and msg.type == "human":
            return msg.content
    return ""

def detect_intent(state: SimpleAssistantState) -> tuple[Literal["prompt", "code"], str]:
    if state.get("code") or state.get("error_message"):
        return "code", "state contains code or error_message"

    messages = state.get("messages", [])
    user_input = _get_latest_user_message(messages).strip().lower()

    if any(keyword in user_input for keyword in CODE_HINT_KEYWORDS):
        return "code", "matched a programming/error keyword"

    return "prompt", "default: general Q&A agent"

def route_after_router(state: SimpleAssistantState) -> Literal["prompt_agent_node", "code_agent_node"]:
    if state.get("intent") == "code":
        return "code_agent_node"
    return "prompt_agent_node"

Routing logic: first check whether the state carries code or error_message — if so, go straight to code_agent. Otherwise extract the latest user input from messages; if it matches a keyword, route to code_agent, else fall back to prompt_agent.
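The routing rules are easy to sanity-test in isolation. A condensed, dependency-free version of detect_intent (keyword tuple trimmed for brevity; the full one above also covers Chinese terms):

```python
from typing import Literal

# Trimmed keyword list for illustration.
CODE_HINT_KEYWORDS = ("python", "debug", "bug", "traceback", "review")

def detect_intent(code: str, error_message: str, user_input: str) -> Literal["prompt", "code"]:
    # Explicit code context in the state wins outright.
    if code or error_message:
        return "code"
    # Otherwise fall back to keyword matching on the latest user message.
    text = user_input.strip().lower()
    if any(keyword in text for keyword in CODE_HINT_KEYWORDS):
        return "code"
    return "prompt"
```

Keyword routing is crude but cheap — no extra LLM call just to pick a branch — which is the trade-off this workflow makes.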

Node implementations

workflow/nodes/simple_assistant_nodes.py

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.runnables import RunnableConfig
from agents.code_agent import code_agent
from agents.prompt_agent import prompt_agent
from core.tracing import extend_run_config
from workflow.routes.simple_assistant_routes import detect_intent
from workflow.states.simple_assistant_state import SimpleAssistantState

def router_node(state: SimpleAssistantState) -> dict:
    intent, route_reason = detect_intent(state)
    return {"intent": intent, "route_reason": route_reason}

def prompt_agent_node(state: SimpleAssistantState, *, config: RunnableConfig) -> dict:
    messages = state.get("messages", [])
    last_human_msg = ""
    for msg in reversed(messages):
        if isinstance(msg, HumanMessage):
            last_human_msg = msg.content
            break

    agent_config = extend_run_config(
        config,
        run_name="prompt_agent_node",
        tags=["node", "prompt_agent_node"],
    )

    try:
        result = prompt_agent.reply(last_human_msg, config=agent_config)
        return {
            "messages": [AIMessage(content=result["answer"])],
            "agent_name": result["agent_name"],
            "answer": result["answer"],
            "thinking": result["thinking"],
            "tool_route": result["tool_route"],
            "scenario": "prompt_answer",
        }
    except Exception as exc:
        return {
            "messages": [AIMessage(content=f"prompt_agent call failed: {exc}")],
            "agent_name": "prompt_agent",
            "answer": f"prompt_agent call failed: {exc}",
            "thinking": "",
            "tool_route": "general",
            "scenario": "prompt_error",
        }

def code_agent_node(state: SimpleAssistantState, *, config: RunnableConfig) -> dict:
    messages = state.get("messages", [])
    last_human_msg = ""
    for msg in reversed(messages):
        if isinstance(msg, HumanMessage):
            last_human_msg = msg.content
            break

    agent_config = extend_run_config(
        config,
        run_name="code_agent_node",
        tags=["node", "code_agent_node"],
    )

    try:
        if state.get("code") or state.get("error_message"):
            result = code_agent.debug_reply(
                task=last_human_msg,
                code=state.get("code", ""),
                error_message=state.get("error_message", ""),
                expected_behavior=state.get("expected_behavior", ""),
                language=state.get("language", "Python"),
                config=agent_config,
            )
        else:
            result = code_agent.reply(last_human_msg, config=agent_config)
    except Exception as exc:
        return {
            "messages": [AIMessage(content=f"code_agent call failed: {exc}")],
            "agent_name": "code_agent",
            "answer": f"code_agent call failed: {exc}",
            "thinking": "",
            "scenario": "code_error",
            "tool_route": "general",
        }

    return {
        "messages": [AIMessage(content=result["answer"])],
        "agent_name": result["agent_name"],
        "answer": result["answer"],
        "thinking": result["thinking"],
        "scenario": result["scenario"],
        "tool_route": state.get("tool_route", "general"),
    }

Each node reads the messages list from state, grabs the last HumanMessage, and passes it to its agent. The messages field in the returned dict is automatically merged into the history by LangGraph.
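The reversed scan for the last HumanMessage is duplicated in both nodes; it could be factored into a small helper. A self-contained sketch, using stand-in message classes so it runs without langchain_core installed (the real code would use isinstance against langchain_core's HumanMessage):

```python
from dataclasses import dataclass

# Stand-ins for langchain_core message types; only `content` matters here.
@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

def last_human_message(messages: list) -> str:
    """Walk the history backwards and return the newest human message's content."""
    for msg in reversed(messages):
        if isinstance(msg, HumanMessage):
            return msg.content
    return ""
```

Both nodes would then reduce their four-line loop to `last_human_msg = last_human_message(messages)`.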

Graph construction

workflow/graph/simple_assistant_graph.py

from typing import Any
from langgraph.graph import END, START, StateGraph
from workflow.nodes.simple_assistant_nodes import code_agent_node, prompt_agent_node, router_node
from workflow.routes.simple_assistant_routes import route_after_router
from workflow.states.simple_assistant_state import SimpleAssistantState
from core.tracing import build_run_config, configure_langsmith

DEFAULT_WORKFLOW_PROJECT = "demos-simple-assistant"

def build_simple_assistant_graph():
    graph = StateGraph(SimpleAssistantState)

    graph.add_node("router_node", router_node, metadata={"step": "routing"})
    graph.add_node("prompt_agent_node", prompt_agent_node, metadata={"step": "agent", "agent": "prompt_agent"})
    graph.add_node("code_agent_node", code_agent_node, metadata={"step": "agent", "agent": "code_agent"})

    graph.add_edge(START, "router_node")
    graph.add_conditional_edges("router_node", route_after_router)
    graph.add_edge("prompt_agent_node", END)
    graph.add_edge("code_agent_node", END)

    return graph.compile(name="simple_assistant_graph")

app = build_simple_assistant_graph()

def run_simple_assistant(
    messages: list,
    *,
    code: str = "",
    error_message: str = "",
    expected_behavior: str = "",
    language: str = "Python",
    project_name: str = DEFAULT_WORKFLOW_PROJECT,
) -> dict[str, Any]:
    langsmith_project = configure_langsmith(project_name)

    state: SimpleAssistantState = {
        "messages": messages,
        "code": code,
        "error_message": error_message,
        "expected_behavior": expected_behavior,
        "language": language,
    }

    run_config = build_run_config(
        run_name="simple_assistant_run",
        tags=["workflow", "simple_assistant"],
        metadata={
            "workflow": "simple_assistant",
            "langsmith_project": langsmith_project,
            "has_code_context": bool(code or error_message),
        },
    )
    return app.invoke(state, config=run_config)

Flow: START → router_node picks a branch → prompt_agent_node or code_agent_node depending on intent → END.
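Stripped of LangGraph, the compiled graph's control flow is equivalent to this plain-Python sketch (node bodies stubbed — the real nodes call the agents; this only shows the shape of invoke):

```python
# Control-flow equivalent of the compiled graph, with stubbed nodes.
def router_node(state: dict) -> dict:
    intent = "code" if state.get("code") or state.get("error_message") else "prompt"
    return {"intent": intent}

def prompt_agent_node(state: dict) -> dict:
    return {"scenario": "prompt_answer"}

def code_agent_node(state: dict) -> dict:
    return {"scenario": "code_answer"}

def invoke(state: dict) -> dict:
    # START -> router_node: its partial dict is merged into state.
    state.update(router_node(state))
    # Conditional edge: route_after_router's branch, then END.
    if state["intent"] == "code":
        state.update(code_agent_node(state))
    else:
        state.update(prompt_agent_node(state))
    return state
```

What LangGraph adds on top of this trivial dispatch: reducer-aware state merging, per-node tracing metadata, and a graph you can render and extend without rewriting the control flow.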


Entry script

run_workflow.py

import argparse
from langchain_core.messages import HumanMessage
from workflow.graph import app, run_simple_assistant

def print_graph():
    print("\n" + "=" * 60)
    print("Workflow Graph:")
    print("=" * 60)
    graph = app.get_graph()
    print(graph.draw_mermaid())
    print("=" * 60 + "\n")

def main():
    parser = argparse.ArgumentParser(description="Simple Assistant Workflow")
    parser.add_argument("--show-graph", action="store_true", help="Show workflow graph")
    parser.add_argument("--user-input", type=str, help="User input for the workflow")
    parser.add_argument("--code", type=str, default="", help="Code snippet (optional)")
    parser.add_argument("--error", type=str, default="", help="Error message (optional)")
    parser.add_argument("--expected", type=str, default="", help="Expected behavior (optional)")
    parser.add_argument("--language", type=str, default="Python", help="Programming language")

    args = parser.parse_args()

    if args.show_graph:
        print_graph()
        return

    messages = []

    if args.user_input:
        messages.append(HumanMessage(content=args.user_input))
        result = run_simple_assistant(
            messages=messages,
            code=args.code,
            error_message=args.error,
            expected_behavior=args.expected,
            language=args.language,
        )
        messages = result.get("messages", [])
        print("\n" + "=" * 60)
        print("Answer:")
        print("=" * 60)
        if messages and hasattr(messages[-1], "content"):
            print(messages[-1].content)
        print("=" * 60)
    else:
        print("Simple Assistant Workflow CLI (multi-turn mode)")
        print("=" * 60)
        print("Type exit or quit to end the conversation")
        print()

        while True:
            user_input = input("You: ").strip()
            if not user_input:
                continue
            if user_input.lower() in ("exit", "quit", "q"):
                print("Bye!")
                break

            messages.append(HumanMessage(content=user_input))
            print("Thinking...\n")

            result = run_simple_assistant(
                messages=messages,
                code="",
                error_message="",
                expected_behavior="",
                language="Python",
            )
            messages = result.get("messages", [])

            if messages and hasattr(messages[-1], "content"):
                print(f"\nAssistant: {messages[-1].content}\n")

if __name__ == "__main__":
    main()

Two modes are supported:

  • --user-input: single-turn Q&A
  • no arguments: interactive multi-turn loop

Running it

Show the workflow graph

uv run python run_workflow.py --show-graph

This prints the graph in Mermaid format; paste it into mermaid.live to render it.

Single-turn Q&A

uv run python run_workflow.py --user-input "Hi, introduce yourself"

Multi-turn conversation

uv run python run_workflow.py

Example session:

Simple Assistant Workflow CLI (multi-turn mode)
============================================================
Type exit or quit to end the conversation

You: Explain what a closure is
Assistant: A closure is...

You: Can you give a Python example?
Assistant: Sure, here's one...

You: exit
Bye!

How multi-turn conversation works

The core is messages: Annotated[list, add_messages] in the State.

add_messages is LangGraph's built-in reducer function: it tells LangGraph how to merge repeated updates to the same field. The default behavior is overwrite; with the reducer set, LangGraph appends new messages to the list instead of replacing it wholesale.

So the flow goes:

  1. User types "hi" → messages = [HumanMessage(content="hi")]
  2. The node returns {"messages": [AIMessage(content="Hi, how can I help?")]}
  3. LangGraph merges automatically → messages = [HumanMessage(...), AIMessage(...)]
  4. User follows up with "what can you do" → messages = [HumanMessage(...), AIMessage(...), HumanMessage(content="what can you do")]
  5. The node sees the full history and can answer with the earlier context in mind

Key point: the node's return value must include a messages field, or LangGraph won't trigger the merge.
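The append-vs-overwrite distinction can be shown without LangGraph at all. A simplified stand-in for the reducer (the real add_messages also de-duplicates by message ID and coerces message-like dicts — this only shows the merge semantics):

```python
def overwrite(left, right):
    # Default channel behavior: the new value replaces the old one.
    return right

def add_messages_simplified(left: list, right: list) -> list:
    # Reducer behavior: new messages are appended to the history.
    return left + right

# Replaying the five steps above:
history = []
history = add_messages_simplified(history, ["Human: hi"])
history = add_messages_simplified(history, ["AI: Hi, how can I help?"])
history = add_messages_simplified(history, ["Human: what can you do"])
```

With overwrite, each turn would wipe the previous one and the assistant would have amnesia; the reducer is the entire multi-turn mechanism.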


Final project structure

demos/
├── agents/
│   ├── __init__.py
│   ├── code_agent.py
│   └── prompt_agent.py
├── core/
│   ├── __init__.py
│   ├── llm.py
│   └── tracing.py
├── tools/
│   ├── __init__.py
│   ├── math_tools.py
│   └── search_tools.py
├── workflow/
│   ├── graph/
│   │   ├── __init__.py
│   │   └── simple_assistant_graph.py
│   ├── nodes/
│   │   ├── __init__.py
│   │   └── simple_assistant_nodes.py
│   ├── routes/
│   │   ├── __init__.py
│   │   └── simple_assistant_routes.py
│   ├── states/
│   │   ├── __init__.py
│   │   └── simple_assistant_state.py
│   └── simple_assistant/
│       └── __init__.py
├── .env
├── .gitignore
├── pyproject.toml
└── run_workflow.py

The code is open source: kunyashaw/langgraph-smart-faq-wokflow


Source: https://www.cnblogs.com/kunyashaw/p/19841640