
GPUStack






GPUStack is an open-source GPU cluster manager for running AI models.

Key Features

  • High Performance: Optimized for high-throughput and low-latency inference.
  • GPU Cluster Management: Efficiently manage multiple GPU clusters across different providers, including Docker-based, Kubernetes, and cloud platforms such as DigitalOcean.
  • Broad GPU Compatibility: Seamless support for GPUs from various vendors.
  • Extensive Model Support: Supports a wide range of models, including LLMs, VLMs, image models, audio models, embedding models, and rerank models.
  • Flexible Inference Backends: Built-in support for fast inference engines such as vLLM and SGLang, with the ability to integrate custom backends.
  • Multi-Version Backend Support: Run multiple versions of inference backends concurrently to meet diverse runtime requirements.
  • Distributed Inference: Supports single-node and multi-node, multi-GPU inference, including heterogeneous GPUs across vendors and environments.
  • Scalable GPU Architecture: Easily scale by adding more GPUs, nodes, or clusters to your infrastructure.
  • Robust Model Stability: Ensures high availability through automatic failure recovery, multi-instance redundancy, and intelligent load balancing.
  • Intelligent Deployment Evaluation: Automatically assesses model resource requirements, backend and architecture compatibility, OS compatibility, and other deployment factors.
  • Automated Scheduling: Dynamically allocates models based on available resources.
  • OpenAI-Compatible APIs: Fully compatible with OpenAI API specifications for seamless integration.
  • User & API Key Management: Simplified management of users and API keys.
  • Real-Time GPU Monitoring: Monitor GPU performance and utilization in real time.
  • Token and Rate Metrics: Track token usage and API request rates.

Installation

GPUStack currently supports Linux only. On Windows, run GPUStack inside WSL2 with Docker installed in the WSL2 distribution rather than Docker Desktop.

If you are using NVIDIA GPUs, ensure that the NVIDIA driver, Docker, and the NVIDIA Container Toolkit are installed. Then start the GPUStack server with the following command:

sudo docker run -d --name gpustack \
    --restart unless-stopped \
    --privileged \
    --network host \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume gpustack-data:/var/lib/gpustack \
    --runtime nvidia \
    gpustack/gpustack

If you cannot pull images from Docker Hub, or the download is very slow, you can use the Quay.io mirror instead by pointing the default container registry to quay.io:

sudo docker run -d --name gpustack \
    --restart unless-stopped \
    --privileged \
    --network host \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume gpustack-data:/var/lib/gpustack \
    --runtime nvidia \
    quay.io/gpustack/gpustack \
    --system-default-container-registry quay.io

For more details on the installation or other GPU hardware platforms, please refer to the Installation Requirements.

Check the GPUStack startup logs:

sudo docker logs -f gpustack

After GPUStack starts, run the following command to get the default admin password:

sudo docker exec gpustack cat /var/lib/gpustack/initial_admin_password

Open your browser and navigate to http://your_host_ip to access the GPUStack UI. Use the default username admin and the password you retrieved above to log in.
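Optionally, you can confirm that the server is reachable before opening the browser. The following is a minimal sketch using Python's standard library; it assumes the UI is served over plain HTTP at the root path (replace your_host_ip with your server address):

# Quick reachability check for the GPUStack UI (assumption: plain HTTP at the
# root path). Replace "your_host_ip" with your actual server address.
import urllib.request

try:
  with urllib.request.urlopen("http://your_host_ip/", timeout=5) as resp:
    print(f"GPUStack responded with HTTP {resp.status}")
except OSError as exc:
  print(f"Server not reachable yet: {exc}")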

Deploy a Model

  1. Navigate to the Catalog page in the GPUStack UI.

  2. Select the Qwen3 0.6B model from the list of available models.

  3. After the deployment compatibility checks pass, click the Save button to deploy the model.

[Screenshot: deploying Qwen3 from the Catalog]

  4. GPUStack will start downloading the model files and deploying the model. When the deployment status shows Running, the model has been deployed successfully.

[Screenshot: the model in Running status]

  5. Click Playground - Chat in the navigation menu, verify that the qwen3-0.6b model is selected in the Model dropdown at the top right, and start chatting with the model in the UI playground.

[Screenshot: chatting with the model in the Playground]

Use the model via API

  1. Hover over the user avatar and navigate to the API Keys page, then click the New API Key button.

  2. Fill in the Name and click the Save button.

  3. Copy the generated API key and save it somewhere safe. Please note that you can only see it once on creation.

  4. You can now use the API key to access the OpenAI-compatible API endpoints provided by GPUStack. For example, use curl as follows:

# Replace `your_api_key` and `your_gpustack_server_url`
# with your actual API key and GPUStack server URL.
export GPUSTACK_API_KEY=your_api_key
curl http://your_gpustack_server_url/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "qwen3-0.6b",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Tell me a joke."
      }
    ],
    "stream": true
  }'
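
The same streaming request can also be made from Python with the official OpenAI client library. The sketch below assumes the qwen3-0.6b model deployed earlier and uses the same placeholder server URL and API key as the curl example:

# Streaming chat completion with the official OpenAI Python client.
# Replace the base_url and api_key placeholders with your actual values.
from openai import OpenAI

client = OpenAI(
  base_url="http://your_gpustack_server_url/v1",
  api_key="your_api_key",
)

stream = client.chat.completions.create(
  model="qwen3-0.6b",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a joke."}
  ],
  stream=True,
)

# Print the response tokens as they arrive.
for chunk in stream:
  if chunk.choices and chunk.choices[0].delta.content:
    print(chunk.choices[0].delta.content, end="", flush=True)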

Supported Accelerators

GPUStack supports a variety of General-Purpose Accelerators, including:

  • NVIDIA GPU
  • AMD GPU
  • Ascend NPU
  • Hygon DCU (Experimental)
  • MThreads GPU (Experimental)
  • Iluvatar GPU (Experimental)
  • MetaX GPU (Experimental)
  • Cambricon MLU (Experimental)

Supported Models

GPUStack uses vLLM, SGLang, MindIE and vox-box as built-in inference backends, and it also supports any custom backend that can run in a container and expose a serving API. This allows GPUStack to work with a wide range of models.

Models can come from the following sources:

  1. Hugging Face

  2. ModelScope

  3. Local File Path

For information on which models are supported by each built-in inference backend, please refer to the supported models section in the Built-in Inference Backends documentation.

OpenAI-Compatible APIs

GPUStack serves OpenAI-compatible APIs under the /v1 path.

For example, you can use the official OpenAI Python API library to consume the APIs:

from openai import OpenAI

# Point the client at your GPUStack server and authenticate with your API key.
client = OpenAI(base_url="http://your_gpustack_server_url/v1", api_key="your_api_key")

# Request a chat completion from a deployed model (here, the qwen3-0.6b model
# deployed earlier; replace it with the name of any model you have deployed).
completion = client.chat.completions.create(
  model="qwen3-0.6b",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)

GPUStack users can generate their own API keys in the UI.
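
Because the API follows the OpenAI specification, the same client can also discover which models are currently served. The snippet below is a minimal sketch that assumes GPUStack exposes the standard OpenAI-compatible model listing endpoint at /v1/models:

from openai import OpenAI

client = OpenAI(base_url="http://your_gpustack_server_url/v1", api_key="your_api_key")

# List the models currently served by GPUStack (assumes the standard
# OpenAI-compatible /v1/models endpoint).
for model in client.models.list():
  print(model.id)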

Documentation

Please see the official docs site for complete documentation.

Build

  1. Install Python (version 3.10 to 3.12).

  2. Run make build.

You can find the built wheel package in the dist directory.

Contributing

Please read the Contributing Guide if you're interested in contributing to GPUStack.

Join Community

If you have any issues or suggestions, feel free to join our Community for support.

License

Copyright (c) 2024 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
