
LangChain Chains (LCEL)

LCEL (LangChain Expression Language) is LangChain's chain-composition syntax. Its core is the pipe operator |, which connects multiple components into a single executable chain.

1. The simplest chain

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a translation assistant."),
    ("human", "Translate the following into {language}: {text}"),
])

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Chain the three components together with |
chain = prompt | llm | parser

result = chain.invoke({
    "language": "English",
    "text": "人工智能正在改变世界。",
})
print(result)  # Artificial intelligence is changing the world.

Data flow: the input to invoke() → prompt formats it into messages → llm generates a reply → parser converts the reply into a string

2. Ways to run a chain

Every chain exposes the same set of execution methods:

chain = prompt | llm | parser

# Single call
result = chain.invoke({"language": "English", "text": "你好"})

# Batch calls (run concurrently)
results = chain.batch([
    {"language": "English", "text": "你好"},
    {"language": "Japanese", "text": "你好"},
    {"language": "French", "text": "你好"},
])

# Streaming output
for chunk in chain.stream({"language": "English", "text": "写一首短诗"}):
    print(chunk, end="", flush=True)

# Async call
import asyncio
result = asyncio.run(chain.ainvoke({"language": "English", "text": "你好"}))

3. RunnablePassthrough — passing data through

RunnablePassthrough forwards its input unchanged; it is commonly used to carry the raw input along through a chain:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

load_dotenv()

prompt = ChatPromptTemplate.from_messages([
    ("human", "Summarize in one sentence: {text}"),
])

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

chain = (
    {"text": RunnablePassthrough()}  # route the raw input into the text field
    | prompt
    | llm
    | parser
)

result = chain.invoke("Python is an interpreted, object-oriented, high-level programming language created by Guido van Rossum in 1991.")
print(result)

4. RunnableParallel — parallel execution

RunnableParallel runs several sub-chains at the same time and merges their results into a dict:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

load_dotenv()

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

summary_prompt = ChatPromptTemplate.from_messages([
    ("human", "Summarize this passage in one sentence: {text}"),
])

translate_prompt = ChatPromptTemplate.from_messages([
    ("human", "Translate this passage into English: {text}"),
])

parallel_chain = RunnableParallel({
    "summary": summary_prompt | llm | parser,
    "translation": translate_prompt | llm | parser,
    "original": RunnablePassthrough(),  # keep the original input
})

result = parallel_chain.invoke({"text": "Python 是目前最受欢迎的编程语言之一。"})
print("Original:", result["original"])
print("Summary:", result["summary"])
print("Translation:", result["translation"])

5. RunnableLambda — custom functions

Wrap any Python function as a step in a chain:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda

load_dotenv()

def add_exclamation(text: str) -> str:
    return text + "!!!"

def to_uppercase(text: str) -> str:
    return text.upper()

prompt = ChatPromptTemplate.from_messages([
    ("human", "Describe {topic} in one word"),
])

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

chain = (
    prompt
    | llm
    | parser
    | RunnableLambda(add_exclamation)
    | RunnableLambda(to_uppercase)
)

result = chain.invoke({"topic": "Python"})
print(result)  # e.g. "Elegant" -> "Elegant!!!" -> "ELEGANT!!!"

6. Branching a chain (conditional routing)

Use RunnableBranch to route the input to different processing logic depending on a condition:

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch

load_dotenv()

llm = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

python_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a Python expert"),
    ("human", "{question}"),
])

java_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a Java expert"),
    ("human", "{question}"),
])

default_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a general-purpose programming expert"),
    ("human", "{question}"),
])

chain = RunnableBranch(
    (lambda x: "python" in x["question"].lower(), python_prompt | llm | parser),
    (lambda x: "java" in x["question"].lower(), java_prompt | llm | parser),
    default_prompt | llm | parser,  # default branch
)

result = chain.invoke({"question": "What is the difference between lists and tuples in Python?"})
print(result)

7. Inspecting a chain's structure

For a complex chain, .get_graph().print_ascii() prints an ASCII diagram of the processing flow:

chain = prompt | llm | parser
chain.get_graph().print_ascii()

Summary

  • The | pipe operator connects components into a chain; data flows from left to right
  • invoke() for a single call, batch() for batches, stream() for streaming, ainvoke() for async
  • RunnablePassthrough forwards data unchanged
  • RunnableParallel runs several sub-chains at once
  • RunnableLambda wraps a plain function as a chain step
  • RunnableBranch implements conditional routing
