API

The API story has two explicit paths: local OpenAI compatibility and a managed hosted alternative

The upstream docs cover both a local OpenAI-compatible API via `openai_api.py` and a managed DashScope option for hosted access.


Two API paths

The local API example installs FastAPI, Uvicorn, `openai<1.0`, Pydantic, and `sse_starlette`, then runs `openai_api.py`.

If you do not want to run local serving infrastructure, the README separately points to DashScope as the managed API entry.

OpenAI-compatible client call

import openai

# Point the pre-1.0 OpenAI client at the local server started by openai_api.py.
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "none"  # dummy value; the local server does not check it

response = openai.ChatCompletion.create(
    model="Qwen",
    messages=[{"role": "user", "content": "你好"}],
    stream=False,
    stop=[],  # optional extra stop strings
)

print(response.choices[0].message.content)
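The same client can also stream. A minimal sketch, assuming the local `openai_api.py` server is running on port 8000; the helper names (`collect_stream`, `stream_chat`) are illustrative, not part of the upstream API:

```python
def collect_stream(chunks):
    """Concatenate the incremental `delta` content fields of streamed chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)


def stream_chat(prompt):
    """Stream one chat completion from the local openai_api.py server."""
    import openai  # pre-1.0 SDK, matching the `openai<1.0` requirement above

    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "none"  # dummy value; the local server does not check it
    chunks = openai.ChatCompletion.create(
        model="Qwen",
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # server sends SSE chunks via sse_starlette
    )
    return collect_stream(chunks)
```

With `stream=True` the server returns server-sent-event chunks, which is why the install step pulls in `sse_starlette`.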

API notes

Local API

Function calling

The upstream README notes that function calling is supported in the local API path, with the temporary limitation that requests must set `stream=False`.
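A minimal sketch of that flow, using the OpenAI-style `functions` field of the pre-1.0 SDK; the weather schema and helper names here are hypothetical illustrations, not part of the upstream README:

```python
def weather_function_spec():
    """A hypothetical function schema in the OpenAI `functions` format."""
    return {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    }


def call_with_functions(prompt):
    """Ask the local server, offering the schema above as a callable tool."""
    import openai  # pre-1.0 SDK

    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "none"
    response = openai.ChatCompletion.create(
        model="Qwen",
        messages=[{"role": "user", "content": prompt}],
        functions=[weather_function_spec()],
        stream=False,  # function calling currently requires non-streaming
    )
    # The returned message may carry a `function_call` field instead of content.
    return response.choices[0].message
```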

Managed API

DashScope

Use DashScope when you need a hosted entry point rather than a local compatibility layer.
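A minimal hosted-path sketch, assuming the `dashscope` Python SDK and a `DASHSCOPE_API_KEY` environment variable; the model name `qwen-turbo` is an illustrative choice, not prescribed by the README:

```python
def build_messages(prompt):
    """Build the OpenAI-style message list that DashScope's Generation API accepts."""
    return [{"role": "user", "content": prompt}]


def call_dashscope(prompt):
    """Call the hosted DashScope endpoint instead of a local server."""
    import os

    import dashscope
    from dashscope import Generation

    dashscope.api_key = os.environ["DASHSCOPE_API_KEY"]
    response = Generation.call(
        model="qwen-turbo",  # illustrative model name
        messages=build_messages(prompt),
        result_format="message",  # return OpenAI-style message objects
    )
    return response.output.choices[0].message.content
```

The client shape mirrors the local path, so switching between the two is mostly a matter of credentials and endpoint.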


Serving stack

FastChat OpenAI server

FastChat also exposes an OpenAI-compatible server as part of its vLLM deployment flow.
