# Function Calling and MCP in Practice

## How Function Calling Works

Function Calling lets an LLM invoke external functions/APIs. The flow:

1. You define the available functions (as JSON Schema)
2. The user asks a question
3. The LLM decides whether a function call is needed
4. If so, the LLM returns the function name and arguments
5. You execute the function and return the result
6. The LLM generates the final answer based on the result
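The execution step of this flow is ordinary dispatch code on your side. A minimal sketch, where `get_weather` is a hypothetical stand-in for a real tool:

```python
# Hypothetical local tool standing in for a real API (illustration only)
def get_weather(city: str) -> str:
    return f"{city}: sunny, 25°C"

# Registry mapping tool names (as declared to the model) to local functions
TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool(name: str, arguments: dict) -> str:
    """Dispatch the model's requested call (step 4) to local code (step 5)."""
    if name not in TOOL_REGISTRY:
        raise ValueError(f"unknown tool: {name}")
    return TOOL_REGISTRY[name](**arguments)

print(execute_tool("get_weather", {"city": "Beijing"}))  # Beijing: sunny, 25°C
```

The registry keeps the model-facing tool names decoupled from your Python function names, which matters once tools are renamed or versioned.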
## Anthropic Tool Use vs. OpenAI Function Calling

| Dimension | Anthropic | OpenAI |
|---|---|---|
| Terminology | Tool Use | Function Calling |
| Schema field | `input_schema` | `parameters` |
| Response format | `tool_use` block | `tool_calls` array |
| Parallel calls | Supported | Supported |
## Defining Tools with JSON Schema

```python
# Anthropic format
tool = {
    "name": "query_database",
    "description": "Query the user database",
    "input_schema": {
        "type": "object",
        "properties": {
            "sql": {
                "type": "string",
                "description": "SQL query to execute"
            },
            "limit": {
                "type": "integer",
                "description": "Max rows to return",
                "default": 10
            }
        },
        "required": ["sql"]
    }
}

# OpenAI format (in the current Chat Completions `tools` parameter, this
# object is wrapped as {"type": "function", "function": {...}})
function = {
    "name": "query_database",
    "description": "Query the user database",
    "parameters": {
        "type": "object",
        "properties": {
            "sql": {"type": "string"},
            "limit": {"type": "integer", "default": 10}
        },
        "required": ["sql"]
    }
}
```
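The schema tells the model what arguments look like, but the model can still emit arguments that violate it, so validating before execution is worthwhile. A minimal stdlib-only checker as a sketch (a library such as `jsonschema` does this more completely):

```python
def validate_args(schema: dict, args: dict) -> list[str]:
    """Check LLM-supplied arguments against an input_schema; return error strings."""
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    type_map = {"string": str, "integer": int, "boolean": bool,
                "object": dict, "array": list}
    for key, value in args.items():
        spec = schema.get("properties", {}).get(key)
        if spec is None:
            errors.append(f"unexpected field: {key}")
        elif not isinstance(value, type_map[spec["type"]]):
            errors.append(f"{key}: expected {spec['type']}")
    return errors

schema = {
    "type": "object",
    "properties": {"sql": {"type": "string"}, "limit": {"type": "integer"}},
    "required": ["sql"],
}
print(validate_args(schema, {"sql": "SELECT 1", "limit": 5}))  # []
print(validate_args(schema, {"limit": "ten"}))  # missing sql + wrong type for limit
```

On validation failure, the idiomatic move is to return the error strings to the model as the tool result so it can retry with corrected arguments.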
## Hands-On: Letting the AI Query a Database

```python
import anthropic
import sqlite3

client = anthropic.Anthropic()
db = sqlite3.connect("users.db")

tools = [{
    "name": "query_db",
    "description": "Execute SQL query on user database",
    "input_schema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string"}
        },
        "required": ["sql"]
    }
}]

def query_db(sql: str) -> str:
    cursor = db.execute(sql)
    rows = cursor.fetchall()
    return str(rows)

def chat(user_message: str):
    messages = [{"role": "user", "content": user_message}]
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        tools=tools,
        messages=messages
    )
    if response.stop_reason == "tool_use":
        tool_use = next(b for b in response.content if b.type == "tool_use")
        result = query_db(tool_use.input["sql"])
        messages.append({"role": "assistant", "content": response.content})
        messages.append({
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": result
            }]
        })
        final_response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            # tools must be passed again: the history now contains tool_use blocks
            tools=tools,
            messages=messages
        )
        return final_response.content[0].text
    return response.content[0].text

# Usage
print(chat("How many users are there?"))
# The model generates SQL such as: SELECT COUNT(*) FROM users
```
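Handing raw SQL execution to a model is risky: nothing stops it from emitting `DROP TABLE`. A common mitigation is a read-only guard around the tool. A minimal sketch (statement whitelisting plus a row cap; a real deployment would also connect with a read-only database user):

```python
import sqlite3

def safe_query_db(db: sqlite3.Connection, sql: str, limit: int = 100) -> str:
    # Allow only a single SELECT statement; reject INSERT/UPDATE/DROP etc.
    stripped = sql.strip().rstrip(";")
    if ";" in stripped or not stripped.lower().startswith("select"):
        return "error: only a single SELECT statement is allowed"
    return str(db.execute(stripped).fetchmany(limit))

# Demo on an in-memory database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
print(safe_query_db(db, "SELECT COUNT(*) FROM users"))  # [(2,)]
print(safe_query_db(db, "DROP TABLE users"))            # error: only a single SELECT ...
```

Returning the refusal as a string (rather than raising) lets the model see the error as a tool result and rephrase its query.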
## Introducing MCP

MCP (Model Context Protocol) is a standardized tool protocol introduced by Anthropic that lets AI applications connect to a variety of data sources and tools.

## MCP Architecture

```
AI application (Claude Code / your app)
        ↓ MCP Client
MCP Server (the tool service you implement)
        ↓
External systems (databases / APIs / file system)
```
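On the wire between client and server, MCP messages are JSON-RPC 2.0: a client discovers tools via a `tools/list` request and invokes one via `tools/call`. A sketch of what a `tools/call` exchange looks like (the field values here are illustrative):

```python
import json

# Client -> server: invoke the "read_file" tool
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "notes.txt"}},
}

# Server -> client: the result carries content blocks (cf. TextContent below)
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "hello"}]},
}

wire = json.dumps(request)  # this serialized form is what crosses the transport
print(json.loads(wire)["method"])  # tools/call
```

The SDKs hide this framing entirely; it is shown here only to make clear that an MCP server is, underneath, an ordinary JSON-RPC service.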
## Steps to Build an MCP Server

- Define the tools
- Implement the tool logic
- Start the MCP server
- Configure the server in your AI application
## Example: Filesystem MCP Server

```python
import asyncio
import os

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

server = Server("filesystem")

@server.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="read_file",
            description="Read file contents",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string"}
                },
                "required": ["path"]
            }
        ),
        Tool(
            name="list_dir",
            description="List directory contents",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {"type": "string"}
                },
                "required": ["path"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "read_file":
        with open(arguments["path"]) as f:
            content = f.read()
        return [TextContent(type="text", text=content)]
    elif name == "list_dir":
        files = os.listdir(arguments["path"])
        return [TextContent(type="text", text="\n".join(files))]
    raise ValueError(f"unknown tool: {name}")

# Run the server over stdio (the low-level Server is driven by an asyncio loop)
async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream,
                         server.create_initialization_options())

asyncio.run(main())
```