API Documentation v1.0
LLM API Integration Guide
This platform provides an LLM gateway fully compatible with the OpenAI API specification. Point your base_url at the platform's address and you can seamlessly access 50+ mainstream models from providers at home and abroad, without changing any business code.
Core Features
Compatibility
OpenAI-compatible
Fully compatible with the OpenAI SDK; change one line of configuration to connect
Coverage
50+ mainstream models
Full coverage of GPT, Claude, Gemini, DeepSeek, Qwen, and more
Stability
99.9% availability
Multi-node load balancing with automatic failover
Billing
Pay-as-you-go
Same prices as the official providers, no markup; top up and start calling
API Basics
| Item | Description |
|---|---|
| Base URL | https://DOMAIN/v1 (replace DOMAIN with your platform's domain) |
| Authentication | Bearer token; request header Authorization: Bearer YOUR_API_KEY |
| Data format | JSON, UTF-8 encoded |
| Protocol | HTTPS |
💡 Tip: You can create and view your API keys in the platform console under "API Key Management".
Quick Start
5-Minute Integration
Follow the steps below to complete the integration and send your first request within 5 minutes.
Step 1 — Get an API Key
Log in to the console, open the API Key Management page, click "Create API Key", and copy the generated key for later use.
Step 2 — Confirm the Model ID
Browse all available models in the Model Marketplace and note the model_id you want to call (e.g. gpt-4o, deepseek-chat).
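If you prefer to confirm model IDs programmatically, many OpenAI-compatible gateways also expose the standard GET /v1/models endpoint. Here is a minimal sketch assuming this platform follows that convention (verify in the console); `extract_model_ids` and `list_models` are illustrative helper names written for this document, not part of any SDK:

```python
# Sketch: list available model IDs via the OpenAI-standard GET /v1/models
# endpoint, using only the standard library.
import json
import urllib.request


def extract_model_ids(payload: dict) -> list[str]:
    """Pull model IDs out of an OpenAI-style /v1/models response."""
    return [m["id"] for m in payload.get("data", [])]


def list_models(base_url: str, api_key: str) -> list[str]:
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))


# Replace the placeholders with your own values before calling:
# ids = list_models("https://DOMAIN/v1", "YOUR_API_KEY")
```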
Step 3 — Send Your First Request
cURL
Python
Node.js
# Replace YOUR_API_KEY and DOMAIN
curl https://DOMAIN/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [{"role": "user", "content": "Hello, please introduce yourself"}]
}'
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://DOMAIN/v1"
)
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello, please introduce yourself"}]
)
print(response.choices[0].message.content)
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "YOUR_API_KEY",
baseURL: "https://DOMAIN/v1",
});
const res = await client.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Hello, please introduce yourself" }],
});
console.log(res.choices[0].message.content);
Model List
Supported Models
The platform continuously adds mainstream models. The models below are a commonly used subset; see "Model Marketplace" in the console for the full list.
OpenAI Series
| Vendor | Model ID | Capability |
|---|---|---|
| OpenAI | gpt-4o | Text + vision |
| OpenAI | gpt-4o-mini | Text generation |
| OpenAI | o1-mini | Reasoning |
| OpenAI | o3-mini | Reasoning |
Anthropic Claude Series
| Vendor | Model ID | Capability |
|---|---|---|
| Anthropic | claude-3-5-sonnet | Text + vision |
| Anthropic | claude-3-5-haiku | Text generation |
| Anthropic | claude-3-opus | Text + vision |
Google Gemini Series
| Vendor | Model ID | Capability |
|---|---|---|
| Google | gemini-2.0-flash | Text + vision |
| Google | gemini-1.5-pro | Text + vision |
Chinese Domestic Models
| Vendor | Model ID | Capability |
|---|---|---|
| DeepSeek | deepseek-chat | Text generation |
| DeepSeek | deepseek-reasoner | Reasoning |
| Alibaba Cloud | qwen-max | Text generation |
| Alibaba Cloud | qwen-vl-max | Text + vision |
| Baidu | ernie-4.0 | Text generation |
| ByteDance | doubao-pro-4k | Text generation |
More models are being added continuously; visit "Model Marketplace" in the console for the latest full list and pricing.
API Reference
Text Generation · Chat Completions
Given a conversation history, the model generates the next message. This is the core and most widely used endpoint.
Endpoint
POST
/v1/chat/completions
Create a chat completion
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID, e.g. gpt-4o, deepseek-chat |
| messages | array | Required | List of messages; each item contains a role (system/user/assistant) and content |
| stream | boolean | Optional | Whether to stream the response; default false |
| max_tokens | integer | Optional | Maximum number of tokens to generate |
| temperature | number | Optional | Sampling temperature, 0–2, default 1.0. Lower values are more deterministic; higher values are more random |
| top_p | number | Optional | Nucleus sampling, 0–1, default 1.0 |
| n | integer | Optional | Number of completions to generate; default 1 |
| stop | string/array | Optional | Stop sequences |
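To illustrate how the optional sampling parameters combine with the two required fields, here is a small sketch that assembles a request body. `build_chat_payload` is a hypothetical helper written for this document, not part of the openai SDK:

```python
# Sketch: build a /v1/chat/completions request body, validating that only
# the parameters documented above are passed through.
def build_chat_payload(model, messages, **options):
    allowed = {"stream", "max_tokens", "temperature", "top_p", "n", "stop"}
    payload = {"model": model, "messages": messages}
    for key, value in options.items():
        if key not in allowed:
            raise ValueError(f"unsupported parameter: {key}")
        payload[key] = value
    return payload


payload = build_chat_payload(
    "gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
    temperature=0.2,  # low temperature: more deterministic output
    max_tokens=256,
    stop=["\n\n"],    # stop generating at the first blank line
)
```

Omitted optional parameters fall back to the server-side defaults listed in the table, so only the knobs you actually tune need to appear in the body.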
Response Structure
200 Success response
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1711900000,
"model": "gpt-4o-mini",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! I'm a large language model assistant..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 48,
"total_tokens": 60
}
}
Code Examples
cURL
Python
Node.js
Golang
curl -X POST https://DOMAIN/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{"role": "system", "content": "You are a professional coding assistant"},
{"role": "user", "content": "Write a bubble sort in Python"}
],
"temperature": 0.7,
"max_tokens": 1024
}'
# pip install openai
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://DOMAIN/v1"
)
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{"role": "system", "content": "You are a professional coding assistant"},
{"role": "user", "content": "Write a bubble sort in Python"},
],
temperature=0.7,
max_tokens=1024
)
print(response.choices[0].message.content)
print(f"Token usage: {response.usage.total_tokens}")
// npm install openai
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "YOUR_API_KEY",
baseURL: "https://DOMAIN/v1",
});
async function main() {
const res = await client.chat.completions.create({
model: "gpt-4o-mini",
messages: [
{ role: "system", content: "You are a professional coding assistant" },
{ role: "user", content: "Write a bubble sort in Python" },
],
temperature: 0.7,
max_tokens: 1024,
});
console.log(res.choices[0].message.content);
console.log(`Token usage: ${res.usage.total_tokens}`);
}
main();
// go get github.com/sashabaranov/go-openai
package main
import (
"context"
"fmt"
openai "github.com/sashabaranov/go-openai"
)
func main() {
config := openai.DefaultConfig("YOUR_API_KEY")
config.BaseURL = "https://DOMAIN/v1"
client := openai.NewClientWithConfig(config)
resp, err := client.CreateChatCompletion(
context.Background(),
openai.ChatCompletionRequest{
Model: "gpt-4o-mini",
Messages: []openai.ChatCompletionMessage{
{Role: openai.ChatMessageRoleSystem, Content: "You are a professional coding assistant"},
{Role: openai.ChatMessageRoleUser, Content: "Write a bubble sort in Python"},
},
},
)
if err != nil {
panic(err)
}
fmt.Println(resp.Choices[0].Message.Content)
}
API Reference
Image Understanding · Vision
Pass images (by URL or Base64) in content to have the model interpret image content. Vision-capable models include gpt-4o, claude-3-5-sonnet, gemini-2.0-flash, qwen-vl-max, and others.
Endpoint
POST
/v1/chat/completions
Image understanding (same endpoint as Chat)
Image understanding uses the same /v1/chat/completions endpoint; the difference is that messages[].content is an array containing both text and image items.
content Structure (multimodal)
| Field | Type | Description |
|---|---|---|
| type | string | text (text) or image_url (image) |
| text | string | Text content when type is text |
| image_url.url | string | Image URL or Base64 data URI (data:image/jpeg;base64,...) |
| image_url.detail | string | auto/low/high; controls recognition fidelity, default auto |
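For local images, image_url.url takes a Base64 data URI in the format shown above. Here is a minimal sketch of building one from raw image bytes; `to_data_uri` is an illustrative helper written for this document, not an SDK function:

```python
# Sketch: encode raw image bytes into a Base64 data URI suitable for the
# image_url.url field of a multimodal content part.
import base64


def to_data_uri(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


# A content part combining the data URI with the detail setting:
part = {
    "type": "image_url",
    "image_url": {"url": to_data_uri(b"\xff\xd8\xff"), "detail": "low"},
}
```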
Request Example
cURL
Python
Node.js
curl https://DOMAIN/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{
"role": "user",
"content": [
{ "type": "text", "text": "What is in this image?" },
{ "type": "image_url", "image_url": { "url": "https://example.com/image.jpg" } }
]
}]
}'
from openai import OpenAI
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://DOMAIN/v1")
response = client.chat.completions.create(
model="gpt-4o",
messages=[{
"role": "user",
"content": [
{"type": "text", "text": "What is in this image?"},
{"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
]
}]
)
print(response.choices[0].message.content)
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "YOUR_API_KEY", baseURL: "https://DOMAIN/v1" });
const res = await client.chat.completions.create({
model: "gpt-4o",
messages: [{
role: "user",
content: [
{ type: "text", text: "What is in this image?" },
{ type: "image_url", image_url: { url: "https://example.com/image.jpg" } }
]
}]
});
console.log(res.choices[0].message.content);
API Reference
Streaming · Stream
With stream: true, the endpoint pushes data token by token in Server-Sent Events (SSE) format, suited to real-time chat scenarios.
Endpoint
POST
/v1/chat/completions
stream: true
SSE Response Format
SSE stream chunks
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hel"}}]}
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"lo"}}]}
data: [DONE]
⚠️ The end-of-stream marker is data: [DONE]; once it is received, you can close the connection.
Example Code
Python
Node.js
cURL
from openai import OpenAI
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://DOMAIN/v1")
stream = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Describe deep learning in one sentence"}],
stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "YOUR_API_KEY", baseURL: "https://DOMAIN/v1" });
const stream = await client.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Describe deep learning in one sentence" }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
console.log();
curl https://DOMAIN/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
--no-buffer \
-d '{
"model": "gpt-4o-mini",
"stream": true,
"messages": [{"role":"user","content":"Describe deep learning in one sentence"}]
}'
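If you are not using an SDK (for example, when consuming the cURL stream above), the SSE lines can be parsed by hand: read each data: line, stop at [DONE], and collect the delta.content fragments. A minimal sketch; `iter_sse_content` is an illustrative helper written for this document:

```python
# Sketch: turn raw SSE lines from /v1/chat/completions into a stream of
# content deltas, honoring the [DONE] end-of-stream marker.
import json


def iter_sse_content(lines):
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        data = line[len("data: "):]
        if data == "[DONE]":
            return  # end-of-stream marker: stop iterating
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]


# Demonstration on the sample chunks shown in the SSE Response Format section:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = "".join(iter_sse_content(sample))  # "Hello"
```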
Code Examples
Complete Text Generation Examples
Below are complete, runnable examples of calling the text generation endpoint in each language; replace YOUR_API_KEY and DOMAIN and they are ready to run.
HTTP / cURL
curl -X POST https://DOMAIN/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "deepseek-chat",
"messages": [
{"role": "system", "content": "You are a professional coding assistant"},
{"role": "user", "content": "Write a bubble sort in Python"}
],
"temperature": 0.7,
"max_tokens": 1024
}'
Python (openai SDK)
# pip install openai
from openai import OpenAI
client = OpenAI(
api_key="YOUR_API_KEY",
base_url="https://DOMAIN/v1"
)
response = client.chat.completions.create(
model="deepseek-chat",
messages=[
{"role": "system", "content": "You are a professional coding assistant"},
{"role": "user", "content": "Write a bubble sort in Python"},
],
temperature=0.7,
max_tokens=1024
)
print(response.choices[0].message.content)
print(f"Token usage: {response.usage.total_tokens}")
Node.js (openai SDK)
// npm install openai
import OpenAI from "openai";
const client = new OpenAI({
apiKey: "YOUR_API_KEY",
baseURL: "https://DOMAIN/v1",
});
async function main() {
const res = await client.chat.completions.create({
model: "deepseek-chat",
messages: [
{ role: "system", content: "You are a professional coding assistant" },
{ role: "user", content: "Write a bubble sort in Python" },
],
temperature: 0.7,
max_tokens: 1024,
});
console.log(res.choices[0].message.content);
console.log(`Token usage: ${res.usage.total_tokens}`);
}
main();
Golang
// go get github.com/sashabaranov/go-openai
package main
import (
"context"
"fmt"
openai "github.com/sashabaranov/go-openai"
)
func main() {
config := openai.DefaultConfig("YOUR_API_KEY")
config.BaseURL = "https://DOMAIN/v1"
client := openai.NewClientWithConfig(config)
resp, err := client.CreateChatCompletion(
context.Background(),
openai.ChatCompletionRequest{
Model: "deepseek-chat",
Messages: []openai.ChatCompletionMessage{
{Role: openai.ChatMessageRoleSystem, Content: "You are a professional coding assistant"},
{Role: openai.ChatMessageRoleUser, Content: "Write a bubble sort in Python"},
},
},
)
if err != nil {
panic(err)
}
fmt.Println(resp.Choices[0].Message.Content)
}
Code Examples
Complete Image Understanding Examples
The examples below show how to pass an image by URL or Base64 so the model can describe its content.
Python — Image via URL
from openai import OpenAI
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://DOMAIN/v1")
response = client.chat.completions.create(
model="gpt-4o",
messages=[{
"role": "user",
"content": [
{"type": "text", "text": "Please describe the content of this image"},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
"detail": "high"
}
}
]
}]
)
print(response.choices[0].message.content)
Python — Local Image via Base64
import base64
from openai import OpenAI
def encode_image(image_path):
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://DOMAIN/v1")
b64 = encode_image("./image.jpg")
response = client.chat.completions.create(
model="gpt-4o",
messages=[{
"role": "user",
"content": [
{"type": "text", "text": "How many people are in this image?"},
{"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}
]
}]
)
print(response.choices[0].message.content)
Node.js
import fs from "fs";
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "YOUR_API_KEY", baseURL: "https://DOMAIN/v1" });
const b64 = fs.readFileSync("./image.jpg").toString("base64");
const res = await client.chat.completions.create({
model: "gpt-4o",
messages: [{
role: "user",
content: [
{ type: "text", text: "How many people are in this image?" },
{ type: "image_url", image_url: { url: `data:image/jpeg;base64,${b64}` } }
]
}]
});
console.log(res.choices[0].message.content);
Golang
package main
import (
"context"; "encoding/base64"; "fmt"; "os"
openai "github.com/sashabaranov/go-openai"
)
func main() {
config := openai.DefaultConfig("YOUR_API_KEY")
config.BaseURL = "https://DOMAIN/v1"
client := openai.NewClientWithConfig(config)
data, err := os.ReadFile("./image.jpg")
if err != nil {
panic(err)
}
b64 := base64.StdEncoding.EncodeToString(data)
dataURL := "data:image/jpeg;base64," + b64
resp, err := client.CreateChatCompletion(context.Background(),
openai.ChatCompletionRequest{
Model: "gpt-4o",
Messages: []openai.ChatCompletionMessage{{
Role: openai.ChatMessageRoleUser,
MultiContent: []openai.ChatMessagePart{
{Type: openai.ChatMessagePartTypeText, Text: "How many people are in this image?"},
{Type: openai.ChatMessagePartTypeImageURL, ImageURL: &openai.ChatMessageImageURL{URL: dataURL}},
},
}},
})
if err != nil {
panic(err)
}
fmt.Println(resp.Choices[0].Message.Content)
}
Code Examples
Complete Streaming Examples
Streaming enables a "typewriter" effect and greatly improves perceived responsiveness.
Python
from openai import OpenAI
client = OpenAI(api_key="YOUR_API_KEY", base_url="https://DOMAIN/v1")
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a short poem about spring"}],
    stream=True
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
Node.js
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "YOUR_API_KEY", baseURL: "https://DOMAIN/v1" });
const stream = await client.chat.completions.create({
model: "gpt-4o-mini",
messages: [{ role: "user", content: "Write a short poem about spring" }],
stream: true,
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
console.log();
Golang (raw HTTP)
package main
import (
"bufio"; "bytes"; "encoding/json"; "fmt"
"net/http"; "strings"
)
func main() {
body, _ := json.Marshal(map[string]any{
"model": "gpt-4o-mini",
"stream": true,
"messages": []map[string]string{{"role": "user", "content": "Write a short poem about spring"}},
})
req, _ := http.NewRequest("POST", "https://DOMAIN/v1/chat/completions", bytes.NewBuffer(body))
req.Header.Set("Authorization", "Bearer YOUR_API_KEY")
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
panic(err)
}
defer resp.Body.Close()
scanner := bufio.NewScanner(resp.Body)
for scanner.Scan() {
line := scanner.Text()
if !strings.HasPrefix(line, "data: ") { continue }
data := strings.TrimPrefix(line, "data: ")
if data == "[DONE]" { break }
var chunk map[string]any
json.Unmarshal([]byte(data), &chunk)
choices, ok := chunk["choices"].([]any)
if !ok || len(choices) == 0 {
continue
}
choice, _ := choices[0].(map[string]any)
delta, _ := choice["delta"].(map[string]any)
if content, ok := delta["content"]; ok {
fmt.Print(content)
}
}
}