# Getting Started with AI API Calls
## What Is an AI API

An AI API is an HTTP interface: you send text (a prompt), and it returns AI-generated text (a reply). It works much like texting: you send a message, and the AI sends one back.
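The exchange is just structured data: each turn is a message object with a role and its content. A minimal sketch with a stand-in for the model (no real API call is made; `fake_reply` is a hypothetical placeholder, not part of any SDK):

```python
# Each turn in a conversation is a dict with a role ("user" or "assistant")
# and the message content.
conversation = [
    {"role": "user", "content": "Hello!"},
]

def fake_reply(messages):
    """Stand-in for the model: echoes the last user message back."""
    last = messages[-1]["content"]
    return {"role": "assistant", "content": f"You said: {last}"}

# The reply comes back as an assistant message, extending the conversation.
conversation.append(fake_reply(conversation))
print(conversation[-1]["content"])
```

The real APIs below accept exactly this shape of `messages` list; only the transport (HTTP plus authentication headers) is added on top.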
## Anthropic Messages API

Anthropic's Claude models are accessed through the Messages API.

### curl example
```shell
curl https://api.anthropic.com/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Hello, Claude!"}
    ]
  }'
```
### Python example
```python
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what an API is in one sentence"}
    ],
)
print(message.content[0].text)
```
### TypeScript example
```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({ apiKey: "YOUR_API_KEY" });

const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Explain what an API is in one sentence" }
  ],
});
console.log(message.content[0].text);
```
## OpenAI Chat Completions API

OpenAI's GPT models are accessed through the Chat Completions API.

### curl example
```shell
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```
### Python example
```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Explain what an API is in one sentence"}
    ],
)
print(response.choices[0].message.content)
```
### TypeScript example
```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: "YOUR_API_KEY" });

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "Explain what an API is in one sentence" }
  ],
});
console.log(response.choices[0].message.content);
```
## Streaming Responses

By default, the API waits until the model has finished generating and returns everything at once. Streaming lets you receive the reply piece by piece as it is generated, producing a "typewriter" effect.

### Anthropic streaming (Python)
```python
import anthropic

client = anthropic.Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a poem about programming"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
### OpenAI streaming (Python)
```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem about programming"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
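Whichever provider you use, the consumer loop follows the same pattern: iterate over chunks, print each one immediately, and accumulate them into the full reply. A provider-agnostic sketch using a stub generator in place of a real stream (`fake_stream` is hypothetical, purely for illustration):

```python
def fake_stream(text, chunk_size=4):
    """Stub generator that yields text in small pieces, mimicking streamed deltas."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

def consume(stream):
    """Print chunks as they arrive and reassemble them into the full reply."""
    parts = []
    for chunk in stream:
        print(chunk, end="", flush=True)  # typewriter effect
        parts.append(chunk)
    print()
    return "".join(parts)

reply = consume(fake_stream("Streaming delivers the reply chunk by chunk."))
```

Keeping the accumulated `parts` list matters in practice: you usually need the complete reply afterward, for example to append it to the conversation history as an assistant message.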
## Error Handling

Common status codes:
| Status code | Meaning | How to handle |
|---|---|---|
| 400 | Malformed request | Check the JSON format and parameters |
| 401 | Invalid API key | Verify the key is correct |
| 429 | Too many requests (rate limited) | Wait and retry, with exponential backoff |
| 500 | Server error | Wait and retry |
| 529 | API overloaded | Wait and retry |
### Retry strategy (Python)
```python
import time

import anthropic

client = anthropic.Anthropic()

def call_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=1024,
                messages=messages,
            )
        except anthropic.RateLimitError:
            wait = 2 ** attempt  # 1s, 2s, 4s
            print(f"Rate limited, waiting {wait}s...")
            time.sleep(wait)
        except anthropic.APIStatusError as e:
            # Retry only server-side errors (5xx); client errors are re-raised.
            if e.status_code >= 500:
                time.sleep(2 ** attempt)
            else:
                raise
    raise Exception("Max retries exceeded")
```
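One refinement worth noting: a fixed `2 ** attempt` schedule makes many clients that were rate-limited at the same moment all retry at the same moment again. A common remedy is "full jitter", drawing each delay uniformly between zero and the exponential cap. A generic sketch (not part of either SDK; the function name and parameters are illustrative):

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=30.0, seed=None):
    """Yield retry delays with full jitter: each is uniform in
    [0, min(cap, base * 2**attempt)], so concurrent clients desynchronize."""
    rng = random.Random(seed)
    for attempt in range(max_retries):
        yield rng.uniform(0.0, min(cap, base * 2 ** attempt))

# Example: iterate the delays inside a retry loop.
for delay in backoff_delays(max_retries=3, seed=1):
    print(f"would sleep {delay:.2f}s before retrying")
```

In the retry function above, `wait = 2 ** attempt` could simply be replaced by the next value from this generator.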
## Using the Inkess API Proxy

The Inkess LLM proxy service gives you access to Claude, GPT, and other models at lower prices, and it is fully compatible with the official SDKs.

Just change the base URL:
```python
# Anthropic SDK
client = anthropic.Anthropic(
    api_key="YOUR_INKESS_KEY",
    base_url="https://llm.starapp.net/api/llm",
)

# OpenAI SDK
client = OpenAI(
    api_key="YOUR_INKESS_KEY",
    base_url="https://llm.starapp.net/api/llm/v1",
)
```
No other code needs to change.

## Summary
| API | Provider | Endpoint | Authentication |
|---|---|---|---|
| Messages API | Anthropic | /v1/messages | `x-api-key` header |
| Chat Completions | OpenAI | /v1/chat/completions | `Authorization: Bearer` header |
| Inkess proxy | Inkess | Same as above | Same as above (change the base URL) |