GPUStack is an open-source GPU cluster manager for running AI models.
- High Performance: Optimized for high-throughput and low-latency inference.
- GPU Cluster Management: Efficiently manage multiple GPU clusters across different environments, including Docker-based clusters, Kubernetes, and cloud platforms such as DigitalOcean.
- Broad GPU Compatibility: Seamless support for GPUs from various vendors.
- Extensive Model Support: Supports a wide range of models, including LLMs, VLMs, image models, audio models, embedding models, and rerank models.
- Flexible Inference Backends: Built-in support for fast inference engines such as vLLM and SGLang, with the ability to integrate custom backends.
- Multi-Version Backend Support: Run multiple versions of inference backends concurrently to meet diverse runtime requirements.
- Distributed Inference: Supports single-node and multi-node, multi-GPU inference, including heterogeneous GPUs across vendors and environments.
- Scalable GPU Architecture: Easily scale by adding more GPUs, nodes, or clusters to your infrastructure.
- Robust Model Stability: Ensures high availability through automatic failure recovery, multi-instance redundancy, and intelligent load balancing.
- Intelligent Deployment Evaluation: Automatically assesses model resource requirements, backend and architecture compatibility, OS compatibility, and other deployment factors.
- Automated Scheduling: Dynamically allocates models based on available resources.
- OpenAI-Compatible APIs: Fully compatible with OpenAI API specifications for seamless integration.
- User & API Key Management: Simplified management of users and API keys.
- Real-Time GPU Monitoring: Monitor GPU performance and utilization in real time.
- Token and Rate Metrics: Track token usage and API request rates.
GPUStack currently supports Linux only. For Windows, use WSL2 and avoid Docker Desktop.
If you are using NVIDIA GPUs, ensure the NVIDIA driver, Docker, and the NVIDIA Container Toolkit are installed. Then start GPUStack with the following command:
```bash
sudo docker run -d --name gpustack \
    --restart unless-stopped \
    --privileged \
    --network host \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume gpustack-data:/var/lib/gpustack \
    --runtime nvidia \
    gpustack/gpustack
```

If you cannot pull images from Docker Hub or the download is very slow, you can use our Quay.io mirror by pointing your registry to quay.io:
```bash
sudo docker run -d --name gpustack \
    --restart unless-stopped \
    --privileged \
    --network host \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume gpustack-data:/var/lib/gpustack \
    --runtime nvidia \
    quay.io/gpustack/gpustack \
    --system-default-container-registry quay.io
```

For more details on the installation or other GPU hardware platforms, please refer to the Installation Requirements.
Check the GPUStack startup logs:
```bash
sudo docker logs -f gpustack
```

After GPUStack starts, run the following command to get the default admin password:
```bash
sudo docker exec gpustack cat /var/lib/gpustack/initial_admin_password
```

Open your browser and navigate to `http://your_host_ip` to access the GPUStack UI. Use the default username `admin` and the password you retrieved above to log in.
- Navigate to the `Catalog` page in the GPUStack UI.
- Select the `Qwen3 0.6B` model from the list of available models.
- After the deployment compatibility checks pass, click the `Save` button to deploy the model.
- GPUStack will start downloading the model files and deploying the model. When the deployment status shows `Running`, the model has been deployed successfully.
- Click `Playground - Chat` in the navigation menu, and check that the model `qwen3-0.6b` is selected from the top-right `Model` dropdown. Now you can chat with the model in the UI playground.

To call the model from outside the UI, create an API key:

- Hover over the user avatar and navigate to the `API Keys` page, then click the `New API Key` button.
- Fill in the `Name` and click the `Save` button.
- Copy the generated API key and save it somewhere safe. Please note that you can only see it once on creation.
- You can now use the API key to access the OpenAI-compatible API endpoints provided by GPUStack. For example, use curl as follows:
```bash
# Replace `your_api_key` and `your_gpustack_server_url`
# with your actual API key and GPUStack server URL.
export GPUSTACK_API_KEY=your_api_key
curl http://your_gpustack_server_url/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $GPUSTACK_API_KEY" \
    -d '{
        "model": "qwen3-0.6b",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Tell me a joke."
            }
        ],
        "stream": true
    }'
```

GPUStack supports a variety of General-Purpose Accelerators, including:
- NVIDIA GPU
- AMD GPU
- Ascend NPU
- Hygon DCU (Experimental)
- MThreads GPU (Experimental)
- Iluvatar GPU (Experimental)
- MetaX GPU (Experimental)
- Cambricon MLU (Experimental)
GPUStack uses vLLM, SGLang, MindIE and vox-box as built-in inference backends, and it also supports any custom backend that can run in a container and expose a serving API. This allows GPUStack to work with a wide range of models.
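To make that concrete, here is a minimal sketch of the kind of containerizable serving API a custom backend could expose. The choice of FastAPI, the endpoint shape, and the stubbed reply are illustrative assumptions, not part of GPUStack itself:

```python
# Illustrative stub of an OpenAI-style chat endpoint that a custom backend
# container could serve. FastAPI here is an assumption for the sketch.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ChatRequest(BaseModel):
    model: str
    messages: list[dict]


@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest) -> dict:
    # A real backend would run model inference here; this stub returns a
    # fixed assistant reply in an OpenAI-compatible shape.
    return {
        "object": "chat.completion",
        "model": req.model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": "Hello from a custom backend."},
                "finish_reason": "stop",
            }
        ],
    }
```

Any container that exposes a serving API along these lines can, per the paragraph above, be integrated as a custom backend.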
Models can come from the following sources:
- Local File Path
For information on which models are supported by each built-in inference backend, please refer to the supported models section in the Built-in Inference Backends documentation.
GPUStack serves the following OpenAI-compatible APIs under the /v1 path:
- List Models
- Create Completion
- Create Chat Completion
- Create Embeddings
- Create Image
- Create Image Edit
- Create Speech
- Create Transcription
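Because these endpoints follow the OpenAI specification, existing OpenAI client libraries work unchanged. As a sketch, here is an embeddings request using the official OpenAI Python library; the server URL, API key, and the model name `bge-m3` are placeholders for your own deployment:

```python
from openai import OpenAI

# Placeholders: point base_url at your GPUStack server and use your own key.
client = OpenAI(base_url="http://your_gpustack_server_url/v1", api_key="your_api_key")

# "bge-m3" is a hypothetical model name; replace it with an embedding model
# actually deployed on your GPUStack server.
response = client.embeddings.create(
    model="bge-m3",
    input=["GPUStack is an open-source GPU cluster manager."],
)
print(len(response.data[0].embedding))  # embedding dimensionality
```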
Similarly, you can use the same library to consume the chat completion API:
```python
from openai import OpenAI

client = OpenAI(base_url="http://your_gpustack_server_url/v1", api_key="your_api_key")
completion = client.chat.completions.create(
    model="llama3.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(completion.choices[0].message)
```

GPUStack users can generate their own API keys in the UI.
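Streaming responses work through the same client as well; a minimal sketch mirroring the `"stream": true` curl example above, with the same placeholder URL and key:

```python
from openai import OpenAI

client = OpenAI(base_url="http://your_gpustack_server_url/v1", api_key="your_api_key")

# stream=True yields chunks as tokens are generated, matching the
# `"stream": true` field in the earlier curl example.
stream = client.chat.completions.create(
    model="qwen3-0.6b",
    messages=[{"role": "user", "content": "Tell me a joke."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```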
Please see the official docs site for complete documentation.
- Install Python (version 3.10 to 3.12).
- Run `make build`.

You can find the built wheel package in the `dist` directory.
Please read the Contributing Guide if you're interested in contributing to GPUStack.
If you have any issues or suggestions, feel free to join our Community for support.
Copyright (c) 2024 The GPUStack authors
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.