Function calling
The upstream README says function calling is supported in the local API path, with a temporary `stream=False` limitation.
These notes are a community-run documentation surface derived from public Qwen source materials; they are not the primary upstream home.
API
The upstream docs cover both a local OpenAI-compatible API via `openai_api.py` and a managed DashScope option for hosted access.
The local API example installs FastAPI, Uvicorn, `openai<1.0`, Pydantic, and `sse_starlette`, then runs `openai_api.py`.
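A minimal sketch of those setup steps. The package list comes from the README; the assumption here is that `openai_api.py` is run from the repository checkout and serves on port 8000, as in the client example below.

```shell
# Install the serving dependencies named in the README.
pip install fastapi uvicorn "openai<1.0" pydantic sse_starlette

# Launch the local OpenAI-compatible server (assumed to listen on
# http://localhost:8000/v1, matching the client example below).
python openai_api.py
```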
If you do not want to run local serving infrastructure, the README separately points to DashScope as the managed API entry.
import openai  # pre-1.0 client, matching the `openai<1.0` pin above

# Point the client at the locally served OpenAI-compatible endpoint.
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "none"  # the local server does not validate the key

response = openai.ChatCompletion.create(
    model="Qwen",
    messages=[{"role": "user", "content": "你好"}],  # "Hello"
    stream=False,
    stop=[],
)
print(response.choices[0].message.content)
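Since the README notes that the local API path supports function calling (with the temporary `stream=False` limitation), a request for that path can be sketched as a plain payload. The `get_weather` schema below is a hypothetical example, not something taken from the upstream docs.

```python
# Sketch of a function-calling request body for the local
# OpenAI-compatible server, usable with the pre-1.0 client as
# openai.ChatCompletion.create(**payload).
payload = {
    "model": "Qwen",
    "messages": [
        {"role": "user", "content": "What is the weather in Beijing?"}
    ],
    "functions": [
        {
            "name": "get_weather",  # hypothetical tool name
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    # Per the README, function calling currently requires
    # non-streaming mode.
    "stream": False,
}
print(payload["functions"][0]["name"])
```

If the model decides to call a tool, the assistant message in the response carries the function name and arguments instead of plain content.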
Use DashScope when you need a hosted entry point rather than a local compatibility layer.
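A hedged sketch of the hosted path. The `dashscope` Python SDK, the `qwen-turbo` model name, and the `DASHSCOPE_API_KEY` environment variable are assumptions about the managed service, not details from this document.

```python
import os

# Same chat payload shape as the local example above.
messages = [{"role": "user", "content": "你好"}]  # "Hello"

if os.getenv("DASHSCOPE_API_KEY"):
    # Only attempted when a key is configured; `dashscope` is the
    # assumed SDK for the hosted entry point.
    import dashscope

    resp = dashscope.Generation.call(
        model="qwen-turbo",  # assumed hosted model name
        messages=messages,
        result_format="message",
    )
    print(resp.output.choices[0].message.content)
else:
    print("Set DASHSCOPE_API_KEY to call the hosted API.")
```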
FastChat also exposes an OpenAI-like server in the vLLM deployment flow.
Source anchors