numaaware: add GPU NUMA topology awareness to scheduler#5095
pmady wants to merge 6 commits into volcano-sh:master
Conversation
Add GPUInfo, GPUDetails types and helper methods (NUMANodes, GPUsInNUMANodes) to support GPU-to-NUMA node mapping in the scheduler API. Extend NumatopoInfo struct with GPUDetail field and update DeepCopy to include GPU topology data. This enables the numaaware plugin to consider GPU PCIe affinity when making scheduling decisions for GPU workloads. Fixes volcano-sh#4998 Signed-off-by: pmady <pavan4devops@gmail.com>
Add gpumanager package that implements the policy.HintProvider interface for nvidia.com/gpu resources. The provider:

- Generates topology hints based on GPU-to-NUMA node affinity
- Prefers allocations from the minimum number of NUMA nodes
- Allocates GPUs from preferred NUMA nodes first, then spills over
- Follows the same pattern as the existing cpumanager provider

Fixes volcano-sh#4998 Signed-off-by: pmady <pavan4devops@gmail.com>
Register the gpumanager HintProvider alongside the existing cpumanager provider so that GPU NUMA topology is considered during predicate and allocation phases. Update getNodeNumaNumForTask to include GPU NUMA node count in the scoring function. Nodes where both CPU and GPU assignments span fewer NUMA nodes receive higher scores, improving locality for mixed CPU+GPU workloads. Add getNumaNodeCntForGPUID helper to count NUMA nodes used by GPU assignments. Fixes volcano-sh#4998 Signed-off-by: pmady <pavan4devops@gmail.com>
Test coverage includes:

- 4 GPUs from a single NUMA node (both NUMA 0 and NUMA 1)
- 4 GPUs when only one NUMA node has enough available
- 5 GPUs requiring cross-NUMA allocation
- 8 GPUs using all available across both NUMA nodes
- 9 GPUs requested (impossible, no hints returned)
- Container with no GPU request (nil hints)
- Allocation from the preferred NUMA node
- Allocation with partial NUMA availability

All tests follow the existing table-driven test pattern from cpumanager. Fixes volcano-sh#4998 Signed-off-by: pmady <pavan4devops@gmail.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has not yet been approved by any approver. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Welcome @pmady! It looks like this is your first PR to volcano-sh/volcano 🎉
Summary of Changes: Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the numaaware scheduler plugin with GPU NUMA topology awareness.
Activity
Code Review
This pull request introduces GPU NUMA topology awareness to the numaaware scheduler plugin, a valuable feature for performance-sensitive GPU workloads. While the changes are well-structured and follow existing patterns, two medium-severity security issues were identified: a potential Denial of Service (DoS) due to exponential complexity in bitmask iteration with many NUMA nodes, and a potential integer overflow when handling large GPU resource requests. These require addressing with appropriate input validation and consistent data types. Additionally, a significant issue in the scoring logic could lead to suboptimal scheduling decisions, and there are a few minor suggestions for improvement.
```diff
 cpuNumaCnt := getNumaNodeCntForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)

+// Include GPU NUMA node count in scoring if GPU assignments exist.
+gpuNumaCnt := 0
+assignGPUs := resAssignMap[node.Name][string(gpumanager.NvidiaGPUResource)]
+if assignGPUs.Size() > 0 && node.NumaSchedulerInfo.GPUDetail != nil {
+	gpuNumaCnt = getNumaNodeCntForGPUID(assignGPUs, node.NumaSchedulerInfo.GPUDetail)
+}
+
+// Total NUMA node count: prefer nodes where both CPU and GPU
+// assignments span fewer NUMA nodes.
+totalNumaCnt := cpuNumaCnt + gpuNumaCnt
 nodeNumaCnts[index] = api.ScoredNode{
 	NodeName: node.Name,
-	Score: int64(getNumaNodeCntForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)),
+	Score: int64(totalNumaCnt),
 }
```
The current scoring logic totalNumaCnt := cpuNumaCnt + gpuNumaCnt incorrectly calculates the total number of NUMA nodes used. It sums the counts of NUMA nodes for CPUs and GPUs separately, which doesn't correctly represent the total span of NUMA nodes when resources share a NUMA node. For example, if both CPUs and GPUs are on NUMA node 0, the current logic would score it as 1+1=2, which is the same score as when CPUs are on NUMA 0 and GPUs are on NUMA 1. The score should be the count of the union of NUMA nodes used by both CPUs and GPUs.
To fix this, you should calculate the union of NUMA node bitmasks for CPUs and GPUs first, and then count the bits in the resulting mask for the score. This will require changing getNumaNodeCntForGPUID to return a bitmask.BitMask, and also modifying getNumaNodeCntForCPUID (or creating a new helper) to return a bitmask.
Suggested change:

```diff
-cpuNumaCnt := getNumaNodeCntForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)
-// Include GPU NUMA node count in scoring if GPU assignments exist.
-gpuNumaCnt := 0
-assignGPUs := resAssignMap[node.Name][string(gpumanager.NvidiaGPUResource)]
-if assignGPUs.Size() > 0 && node.NumaSchedulerInfo.GPUDetail != nil {
-	gpuNumaCnt = getNumaNodeCntForGPUID(assignGPUs, node.NumaSchedulerInfo.GPUDetail)
-}
-// Total NUMA node count: prefer nodes where both CPU and GPU
-// assignments span fewer NUMA nodes.
-totalNumaCnt := cpuNumaCnt + gpuNumaCnt
-nodeNumaCnts[index] = api.ScoredNode{
-	NodeName: node.Name,
-	Score: int64(totalNumaCnt),
-}
+cpuNumaMask := getNumaMaskForCPUID(assignCpus, node.NumaSchedulerInfo.CPUDetail)
+// Include GPU NUMA node count in scoring if GPU assignments exist.
+gpuNumaMask, _ := bitmask.NewBitMask()
+assignGPUs := resAssignMap[node.Name][string(gpumanager.NvidiaGPUResource)]
+if assignGPUs.Size() > 0 && node.NumaSchedulerInfo.GPUDetail != nil {
+	gpuNumaMask = getNumaMaskForGPUID(assignGPUs, node.NumaSchedulerInfo.GPUDetail)
+}
+// Total NUMA node count: prefer nodes where both CPU and GPU
+// assignments span the minimum number of unique NUMA nodes.
+totalNumaMask := cpuNumaMask.Or(gpuNumaMask)
+nodeNumaCnts[index] = api.ScoredNode{
+	NodeName: node.Name,
+	Score: int64(totalNumaMask.Count()),
+}
```
```go
func getNumaNodeCntForGPUID(gpus cpuset.CPUSet, gpuDetails api.GPUDetails) int {
	mask, _ := bitmask.NewBitMask()
	for _, gpuIdx := range gpus.List() {
		if info, ok := gpuDetails[gpuIdx]; ok {
			mask.Add(info.NUMANodeID)
		}
	}
	return mask.Count()
}
```
To implement the corrected scoring logic as suggested in the other comment, this function should be renamed to getNumaMaskForGPUID and return a bitmask.BitMask instead of an int.
Suggested change:

```diff
-func getNumaNodeCntForGPUID(gpus cpuset.CPUSet, gpuDetails api.GPUDetails) int {
+func getNumaMaskForGPUID(gpus cpuset.CPUSet, gpuDetails api.GPUDetails) bitmask.BitMask {
 	mask, _ := bitmask.NewBitMask()
 	for _, gpuIdx := range gpus.List() {
 		if info, ok := gpuDetails[gpuIdx]; ok {
 			mask.Add(info.NUMANodeID)
 		}
 	}
-	return mask.Count()
+	return mask
 }
```
```go
minAffinitySize := gpuDetail.NUMANodes().Size()
hints := []policy.TopologyHint{}

bitmask.IterateBitMasks(gpuDetail.NUMANodes().List(), func(mask bitmask.BitMask) {
```
The generateGPUTopologyHints function calls bitmask.IterateBitMasks with a list of NUMA node IDs. This function iterates over all possible non-empty combinations of the provided NUMA nodes, which has a complexity of O(2^N) for N NUMA nodes. On systems that report many NUMA nodes this becomes a potential denial-of-service vector in the scheduler, so the number of NUMA nodes should be validated or capped before iterating.
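For a sense of scale, a self-contained sketch (not code from the PR) of how the number of visited combinations grows:

```go
package main

import "fmt"

// combos returns how many non-empty NUMA-node subsets an
// IterateBitMasks-style enumeration visits for n nodes: 2^n - 1.
func combos(n int) uint64 {
	return (uint64(1) << uint(n)) - 1
}

func main() {
	for _, n := range []int{2, 4, 8, 16, 32} {
		fmt.Printf("NUMA nodes: %2d -> combinations: %d\n", n, combos(n))
	}
}
```

Two or four nodes are trivial, but a hypothetical 32-node topology would force over four billion callback invocations per hint generation, which is why an upper bound on N matters.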
```go
if !ok {
	return 0
}
return int(gpuQuantity.Value())
```
The requestedGPUs function converts a GPU resource request value from int64 to int. On 32-bit systems, or with extremely large request values, this can result in an integer overflow. A negative value resulting from such an overflow would cause incorrect behavior in the hint generation and allocation logic, potentially leading to suboptimal or incorrect scheduling decisions.
```go
// Count how many available GPUs fall within this NUMA node combination.
numMatching := 0
for _, gpuIdx := range availableGPUs.List() {
	if gpuInfo, ok := gpuDetail[gpuIdx]; ok {
		if mask.IsSet(gpuInfo.NUMANodeID) {
			numMatching++
		}
	}
}
```
The calculation of numMatching is performed inside the IterateBitMasks loop, which iterates over all available GPUs for each NUMA node combination. This can be inefficient. You can optimize this by pre-calculating the number of available GPUs per NUMA node before the loop.
For example:
```go
// Before IterateBitMasks
availablePerNuma := make(map[int]int)
for _, gpuIdx := range availableGPUs.List() {
	if gpuInfo, ok := gpuDetail[gpuIdx]; ok {
		availablePerNuma[gpuInfo.NUMANodeID]++
	}
}

bitmask.IterateBitMasks(..., func(mask bitmask.BitMask) {
	// ...
	// Replace the loop with this:
	numMatching := 0
	for _, numaID := range mask.GetBits() {
		numMatching += availablePerNuma[numaID]
	}
	// ...
})
```
Pull request overview
Adds GPU NUMA-topology awareness to the numaaware scheduler plugin so GPU workloads are preferentially placed on nodes where GPU allocations span fewer NUMA nodes (reducing cross-NUMA PCIe/NVLink traffic).
Changes:

- Extends the scheduler NUMA topology API (NumatopoInfo) with GPU topology (GPUDetail) and helper methods.
- Introduces a new gpumanager hint provider to generate/allocate GPU topology hints for nvidia.com/gpu.
- Updates numaaware scoring to account for GPU NUMA node span in addition to CPU.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| pkg/scheduler/api/numa_info.go | Adds GPU topology types/helpers and deep-copy support via GPUDetail. |
| pkg/scheduler/plugins/numaaware/provider/gpumanager/gpu_mng.go | Implements GPU topology hint generation and allocation for nvidia.com/gpu. |
| pkg/scheduler/plugins/numaaware/provider/gpumanager/gpu_mng_test.go | Adds unit tests for GPU hint generation and allocation behavior. |
| pkg/scheduler/plugins/numaaware/numaaware.go | Registers GPU provider and includes GPU NUMA span in node scoring. |
```go
func generateGPUTopologyHints(availableGPUs cpuset.CPUSet, gpuDetail api.GPUDetails, request int) []policy.TopologyHint {
	minAffinitySize := gpuDetail.NUMANodes().Size()
	hints := []policy.TopologyHint{}

	bitmask.IterateBitMasks(gpuDetail.NUMANodes().List(), func(mask bitmask.BitMask) {
		// Count the total GPUs in the NUMA nodes covered by this mask.
		gpusInMask := gpuDetail.GPUsInNUMANodes(mask.GetBits()...).Size()
		if gpusInMask >= request && mask.Count() < minAffinitySize {
			minAffinitySize = mask.Count()
		}
```
In generateGPUTopologyHints, minAffinitySize is updated based on the total GPUs in a NUMA-mask (gpuDetail.GPUsInNUMANodes), not on the available GPUs. If each single NUMA node has enough total GPUs but not enough available GPUs to satisfy the request, all feasible hints will have mask.Count() > minAffinitySize and will end up with Preferred=false (i.e., no preferred hint), which can degrade/incorrectly influence policy merging. Compute minAffinitySize from the masks that actually satisfy numMatching >= request (or compute min over the constructed hints) and add a unit test for the ‘must span NUMA nodes due to availability’ case.
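The failure mode is easy to check numerically. The sketch below is illustrative only: it hand-rolls the subset enumeration for a two-node system instead of using bitmask.IterateBitMasks, and contrasts a total-capacity-based minAffinitySize with an availability-based one:

```go
package main

import (
	"fmt"
	"math/bits"
)

// minAffinity returns the smallest subset size of {NUMA 0, NUMA 1} whose
// GPU counts in perNuma cover the request, or 0 if no subset suffices.
func minAffinity(perNuma [2]int, request int) int {
	best := 0
	for mask := 1; mask < 4; mask++ { // every non-empty subset of two nodes
		sum := 0
		for numa := 0; numa < 2; numa++ {
			if mask&(1<<numa) != 0 {
				sum += perNuma[numa]
			}
		}
		if sum >= request && (best == 0 || bits.OnesCount(uint(mask)) < best) {
			best = bits.OnesCount(uint(mask))
		}
	}
	return best
}

func main() {
	total := [2]int{4, 4} // installed GPUs per NUMA node
	avail := [2]int{2, 2} // free GPUs per NUMA node

	// For a 3-GPU request, totals claim one node suffices, so the only
	// feasible (two-node) hint would never be marked Preferred.
	fmt.Println(minAffinity(total, 3), minAffinity(avail, 3)) // 1 2
}
```

Computing minAffinitySize from available GPUs makes the two results agree, so the feasible hint is correctly marked Preferred.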
```go
// NumatopoInfo is the information about topology manager on the node
type NumatopoInfo struct {
	Namespace   string
	Name        string
	Policies    map[nodeinfov1alpha1.PolicyName]string
	NumaResMap  map[string]*ResourceInfo
	CPUDetail   topology.CPUDetails
	GPUDetail   GPUDetails
	ResReserved v1.ResourceList
}
```
NumatopoInfo now includes GPUDetail, and gpumanager/scoring depend on it, but in this repo the scheduler cache conversion (SchedulerCache.getNumaInfo in pkg/scheduler/cache/event_handlers.go) currently only populates CPUDetail/NumaResMap/ResReserved and does not set GPUDetail. With the current wiring, topoInfo.GPUDetail will remain empty and the GPU NUMA-awareness logic will never be exercised. Please extend the Numatopology source (CRD/spec + cache conversion) to carry GPU topology data into NumatopoInfo (and ensure the resource name keys match, e.g. "nvidia.com/gpu").
This is fully missing; the feature basically has no runtime plumbing.
The previous scoring logic summed CPU and GPU NUMA node counts separately (cpuNumaCnt + gpuNumaCnt), which incorrectly double-counted shared NUMA nodes. For example, if CPUs and GPUs were both on NUMA 0, the score was 1+1=2, the same as if they were on different NUMA nodes. Fix by computing the union of CPU and GPU NUMA bitmasks before counting. Co-located CPU+GPU on the same NUMA node now correctly scores 1.

Rename helper functions to return bitmask.BitMask instead of int:

- getNumaNodeCntForCPUID -> getNumaMaskForCPUID
- getNumaNodeCntForGPUID -> getNumaMaskForGPUID

Fixes volcano-sh#4998 Signed-off-by: pmady <pavan4devops@gmail.com>
Thanks for your contribution! Will take a look 👍, also /cc @zjj2wry
Remove redundant nil check before len() on GPUDetail map. len() for nil maps is defined as zero in Go. Fixes volcano-sh#4998 Signed-off-by: pmady <pavan4devops@gmail.com>
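The language guarantee this commit relies on can be verified in isolation:

```go
package main

import "fmt"

// lenOfNilMap demonstrates that len() of a nil map is 0 in Go, so a
// pattern like `m != nil && len(m) > 0` reduces to `len(m) > 0`.
func lenOfNilMap() int {
	var m map[int]string // declared but never initialized: nil
	return len(m)
}

func main() {
	fmt.Println(lenOfNilMap()) // 0
}
```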
Keywords which can automatically close issues and at(@) or hashtag(#) mentions are not allowed in commit messages. The list of commits with invalid commit messages:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Hello 👋 ,
Thank you for the contribution!
This is quite a wasp nest you put your hand in. 😄 I don't think that an implementation without any runtime plumbing makes sense here currently (without proper NumaTopology CRD extension, population from exporter, scheduler cache translation...). Without this, the new provider cannot be effective in real clusters.
Also, maybe we can avoid hardcoding "nvidia.com/gpu" in the provider design and make it resource-name driven, with a general GPU abstraction so other accelerators (e.g. Ascend) can be supported by the same framework? An NVIDIA-first implementation is fine, but the abstraction should remain generic.
But first things first. Probably we should update, refactor and implement this feature in resource-exporter at least for nvidia GPUs, and maybe in the scheduler. After that we can revisit this PR.
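As a rough illustration of the resource-name-driven direction being discussed, here is a sketch in which every type, field, and resource name is hypothetical; nothing below exists in Volcano's API:

```go
package main

import "fmt"

// DeviceTopology is a hypothetical resource-name-driven shape for a
// provider, sketching how one framework could serve NVIDIA GPUs,
// Ascend NPUs, or any other accelerator with a NUMA layout.
type DeviceTopology struct {
	ResourceName string      // e.g. "nvidia.com/gpu" or "huawei.com/Ascend910"
	NUMAOf       map[int]int // device index -> NUMA node ID
}

// NUMASpan counts the distinct NUMA nodes a device set touches.
func (t *DeviceTopology) NUMASpan(devices []int) int {
	seen := map[int]struct{}{}
	for _, d := range devices {
		if n, ok := t.NUMAOf[d]; ok {
			seen[n] = struct{}{}
		}
	}
	return len(seen)
}

func main() {
	gpus := &DeviceTopology{
		ResourceName: "nvidia.com/gpu",
		NUMAOf:       map[int]int{0: 0, 1: 0, 2: 1, 3: 1},
	}
	fmt.Println(gpus.NUMASpan([]int{0, 1}), gpus.NUMASpan([]int{1, 2})) // 1 2
}
```

The scoring and hint logic would then depend only on the generic type, with the resource name supplied by configuration rather than compiled in.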
Hi @hajnalmt, thank you for the thorough review. You are absolutely right, and this is exactly the feedback I needed.

**Acknowledging the gap**

I agree that the scheduler-side logic alone does not make sense without the full runtime plumbing. After reviewing the full data path, I realize I should have started from the data-source side. I have now implemented the missing pieces.

**PRs created**

**On the vendor-agnostic question**

Good point about not hard-coding "nvidia.com/gpu". Which direction do you prefer? I am leaning toward option 2 (concrete first, abstract later) but happy to go either way. I will keep this PR open and rebase it once the apis and resource-exporter PRs are merged. Please let me know if the approach looks good!
@pmady: GitHub didn't allow me to request PR reviews from the following users: zjj2wry. Note that only volcano-sh members and repo collaborators can review this PR, and authors cannot review their own PRs.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
**What type of PR is this?**

/kind feature

**What this PR does / why we need it**

This PR extends the `numaaware` scheduler plugin to support GPU NUMA topology awareness. Currently, the plugin only considers CPU NUMA topology when scheduling workloads. For GPU workloads (especially multi-GPU training and LLM inference), cross-NUMA GPU placement causes measurable performance degradation due to NVLink and PCIe traffic crossing NUMA boundaries.

**Changes**

1. API: add GPU topology types (`pkg/scheduler/api/numa_info.go`)
   - Add `GPUInfo` and `GPUDetails` types for GPU-to-NUMA node mapping
   - Add `NUMANodes()` and `GPUsInNUMANodes()` helper methods on `GPUDetails`
   - Add a `GPUDetail` field to the `NumatopoInfo` struct
   - Update `DeepCopy` to include GPU topology data
2. GPU HintProvider (`pkg/scheduler/plugins/numaaware/provider/gpumanager/`)
   - Implement the `policy.HintProvider` interface for `nvidia.com/gpu` resources
   - Follow the same pattern as the existing `cpumanager` provider
3. Plugin registration and scoring (`pkg/scheduler/plugins/numaaware/numaaware.go`)
   - Register `gpumanager.NewProvider()` alongside the existing `cpumanager.NewProvider()`
   - Update `getNodeNumaNumForTask` scoring to include the GPU NUMA node count
   - Add the `getNumaNodeCntForGPUID` helper function

**Example scenario (from #4998)**

- Node A: 8 GPUs, GPUs 0-3 on NUMA 0, GPUs 4-7 on NUMA 1; available: GPUs 0-2 on NUMA 0 plus GPU 7 on NUMA 1
- Node B: 8 GPUs, GPUs 0-3 on NUMA 0, GPUs 4-7 on NUMA 1; available: GPUs 0-3, all on NUMA 0

For a task requesting 4 GPUs, the plugin now prefers Node B, whose four available GPUs sit on a single NUMA node, over Node A, whose allocation would have to span both NUMA nodes.
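The scenario above can be sketched in a few lines of Go (illustrative helper only, not the plugin's actual code):

```go
package main

import "fmt"

// numaSpan counts the NUMA nodes covered by a candidate GPU allocation,
// using the GPU->NUMA layout from the example scenario.
func numaSpan(gpus []int, numaOf map[int]int) int {
	seen := map[int]struct{}{}
	for _, g := range gpus {
		seen[numaOf[g]] = struct{}{}
	}
	return len(seen)
}

func main() {
	// Both nodes: GPUs 0-3 on NUMA 0, GPUs 4-7 on NUMA 1.
	numaOf := map[int]int{0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}

	nodeA := []int{0, 1, 2, 7} // Node A's available GPUs
	nodeB := []int{0, 1, 2, 3} // Node B's available GPUs

	// Lower span wins: Node B keeps the 4-GPU task on one NUMA node.
	fmt.Println(numaSpan(nodeA, numaOf), numaSpan(nodeB, numaOf)) // 2 1
}
```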
**Which issue(s) this PR fixes**
Fixes #4998
**Does this PR introduce a user-facing change?**
Add GPU NUMA topology awareness to the numaaware scheduler plugin. The scheduler now considers GPU-to-NUMA node affinity when scheduling GPU workloads, preferring placements that minimize cross-NUMA GPU allocation for better performance.
**How Has This Been Tested?**