GPUStack




English | 简体中文 | 日本語


Overview

GPUStack is an open-source GPU cluster manager designed for efficient AI model deployment. It lets you run models efficiently on your own GPU hardware by choosing the best inference engines, scheduling GPU resources, analyzing model architectures, and automatically configuring deployment parameters.

The following figure shows how GPUStack delivers improved inference throughput over the unoptimized vLLM baseline:

(Figure: inference throughput on NVIDIA A100, GPUStack vs. unoptimized vLLM baseline)

For detailed benchmarking methods and results, visit our Inference Performance Lab.

Tested Inference Engines, GPUs, and Models

GPUStack uses a plug-in architecture that makes it easy to add new AI models, inference engines, and GPU hardware. We work closely with partners and the open-source community to test and optimize emerging models across different inference engines and GPUs. Below is the current list of supported inference engines, GPUs, and models, which will continue to expand over time.

Tested Inference Engines:

  • vLLM
  • SGLang
  • TensorRT-LLM
  • MindIE

Tested GPUs:

  • NVIDIA A100
  • NVIDIA H100/H200
  • Ascend 910B

Tuned Models:

  • Qwen3
  • gpt-oss
  • GLM-4.5-Air
  • GLM-4.5/4.6
  • DeepSeek-R1

Architecture

GPUStack enables development teams, IT organizations, and service providers to deliver Model-as-a-Service at scale. It supports industry-standard APIs for LLM, voice, image, and video models. The platform includes built-in user authentication and access control, real-time monitoring of GPU performance and utilization, and detailed metering of token usage and API request rates.

The figure below illustrates how a single GPUStack server can manage multiple GPU clusters across both on-premises and cloud environments. The GPUStack scheduler allocates GPUs to maximize resource utilization and selects the appropriate inference engines for optimal performance. Administrators also gain full visibility into system health and metrics through integrated Grafana and Prometheus dashboards.

(Figure: GPUStack architecture, with a single server managing multiple GPU clusters)

GPUStack provides a powerful framework for deploying AI models. Its core features include:

  • Multi-Cluster GPU Management. Manages GPU clusters across multiple environments. This includes on-premises servers, Kubernetes clusters, and cloud providers.
  • Pluggable Inference Engines. Automatically configures high-performance inference engines such as vLLM, SGLang, and TensorRT-LLM. You can also add custom inference engines as needed.
  • Performance-Optimized Configurations. Offers pre-tuned modes for low latency or high throughput. GPUStack supports extended KV cache systems like LMCache and HiCache to reduce TTFT. It also includes built-in support for speculative decoding methods such as EAGLE3, MTP, and N-grams.
  • Enterprise-Grade Operations. Supports automated failure recovery, load balancing, monitoring, authentication, and access control.

Installation

GPUStack currently supports Linux only.

If you are using NVIDIA GPUs, ensure the NVIDIA driver, Docker, and the NVIDIA Container Toolkit are installed. Then start GPUStack with the following command:

sudo docker run -d --name gpustack \
    --restart unless-stopped \
    --privileged \
    --network host \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume gpustack-data:/var/lib/gpustack \
    --runtime nvidia \
    gpustack/gpustack

If you cannot pull images from Docker Hub, or the download is very slow, you can use our Quay.io mirror and point the default container registry to quay.io:

sudo docker run -d --name gpustack \
    --restart unless-stopped \
    --privileged \
    --network host \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume gpustack-data:/var/lib/gpustack \
    --runtime nvidia \
    quay.io/gpustack/gpustack \
    --system-default-container-registry quay.io

For more details on installation, or for other GPU hardware platforms, refer to the Installation Requirements.

Check the GPUStack startup logs:

sudo docker logs -f gpustack

After GPUStack starts, run the following command to get the default admin password:

sudo docker exec gpustack cat /var/lib/gpustack/initial_admin_password

Open your browser and navigate to http://your_host_ip to access the GPUStack UI. Use the default username admin and the password you retrieved above to log in.

Deploy a Model

  1. Navigate to the Catalog page in the GPUStack UI.

  2. Select the Qwen3 0.6B model from the list of available models.

  3. After the deployment compatibility checks pass, click the Save button to deploy the model.

(Screenshot: deploying Qwen3 from the Catalog)

  4. GPUStack will start downloading the model files and deploying the model. When the deployment status shows Running, the model has been deployed successfully.

(Screenshot: the model in the Running state)

  5. Click Playground - Chat in the navigation menu and verify that the qwen3-0.6b model is selected in the top-right Model dropdown. You can now chat with the model in the UI playground.

(Screenshot: chatting with the model in the Playground)

Use the Model via API

  1. Hover over the user avatar and navigate to the API Keys page, then click the New API Key button.

  2. Fill in the Name and click the Save button.

  3. Copy the generated API key and save it somewhere safe. Please note that you can only see it once on creation.

  4. You can now use the API key to access the OpenAI-compatible API endpoints provided by GPUStack. For example, use curl as follows:

# Replace `your_api_key` and `your_gpustack_server_url`
# with your actual API key and GPUStack server URL.
export GPUSTACK_API_KEY=your_api_key
curl http://your_gpustack_server_url/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "qwen3-0.6b",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Tell me a joke."
      }
    ],
    "stream": true
  }'
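
Because the endpoints are OpenAI-compatible, you can also call them from the official OpenAI Python client (or any other OpenAI-compatible SDK). The following is a minimal sketch, assuming you have installed the openai package and exported GPUSTACK_API_KEY as in the curl example above; replace the server URL placeholder with your actual address.

# A minimal sketch using the OpenAI Python client against GPUStack's
# OpenAI-compatible API. Assumes `pip install openai` and that
# GPUSTACK_API_KEY is set in the environment.
import os

from openai import OpenAI

client = OpenAI(
    base_url="http://your_gpustack_server_url/v1",  # replace with your server URL
    api_key=os.environ["GPUSTACK_API_KEY"],
)

# Stream a chat completion from the deployed qwen3-0.6b model.
stream = client.chat.completions.create(
    model="qwen3-0.6b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()

The same pattern works for any tool that accepts an OpenAI-compatible base URL, since it only needs the /v1 endpoint and an API key.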

Documentation

Please see the official docs site for complete documentation.

Build

  1. Install Python (version 3.10 to 3.12).

  2. Run make build.

You can find the built wheel package in the dist directory.

Contributing

Please read the Contributing Guide if you're interested in contributing to GPUStack.

Join Community

If you run into any issues or have suggestions, feel free to join our Community for support.

License

Copyright (c) 2024-2025 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
