feature: Inqueue timeout mechanism to prevent resource blocking by unschedulable PodGroups #5101

rishisulakhe wants to merge 1 commit into volcano-sh:master
Conversation
Summary of Changes (Gemini Code Assist): This pull request addresses a critical issue in the Volcano scheduler where jobs could get stuck in the 'Inqueue' state, consuming queue quota without making progress and blocking other schedulable jobs. The solution introduces a new `inqueuetimeout` plugin.
Code Review
This pull request introduces a new inqueuetimeout plugin to address jobs getting stuck in the Inqueue state. The implementation includes new framework hooks (JobDequeueable/JobDequeued), adds condition tracking for when a job is enqueued, and updates resource accounting in the capacity and proportion plugins. The changes are well-structured and the new feature is a valuable addition to prevent resource starvation.
I have one suggestion regarding the implementation of Inqueued condition tracking to ensure its correctness when jobs are re-enqueued.
```go
job.PodGroup.Status.Conditions = append(job.PodGroup.Status.Conditions, scheduling.PodGroupCondition{
	Type:               scheduling.PodGroupInqueuedType,
	Status:             v1.ConditionTrue,
	LastTransitionTime: metav1.Now(),
	Reason:             "Enqueued",
	Message:            "PodGroup moved to Inqueue state",
})
```
Appending a new Inqueued condition every time a job is enqueued can lead to multiple conditions of the same type. The getInqueueTimestamp function will always pick the first (and oldest) one, which will cause incorrect timeout calculations if a job is dequeued and then re-enqueued. You should use the existing ssn.UpdatePodGroupCondition helper function to either update the existing Inqueued condition or add a new one if it doesn't exist. This ensures the timestamp is always current.
```diff
-job.PodGroup.Status.Conditions = append(job.PodGroup.Status.Conditions, scheduling.PodGroupCondition{
-	Type:               scheduling.PodGroupInqueuedType,
-	Status:             v1.ConditionTrue,
-	LastTransitionTime: metav1.Now(),
-	Reason:             "Enqueued",
-	Message:            "PodGroup moved to Inqueue state",
-})
+ssn.UpdatePodGroupCondition(job, &scheduling.PodGroupCondition{
+	Type:               scheduling.PodGroupInqueuedType,
+	Status:             v1.ConditionTrue,
+	LastTransitionTime: metav1.Now(),
+	Reason:             "Enqueued",
+	Message:            "PodGroup moved to Inqueue state",
+})
```
Pull request overview
This PR introduces an inqueue-timeout mechanism to prevent PodGroups from occupying queue quota indefinitely when they cannot make scheduling progress, by adding a new inqueuetimeout plugin and wiring dequeue hooks into the scheduler framework.
Changes:
- Add a `PodGroupInqueuedType` condition type and record an "Inqueued" timestamp when PodGroups enter `Inqueue`.
- Add new session plugin hooks (`JobDequeueable`/`JobDequeued`) and invoke them from the enqueue action to dequeue timed-out inqueue jobs.
- Update capacity/proportion plugins to release reserved "inqueue" resources when a job is dequeued; add unit tests for the new plugin.
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| staging/src/volcano.sh/apis/pkg/apis/scheduling/v1beta1/types.go | Adds PodGroupInqueuedType condition constant for the v1beta1 API. |
| staging/src/volcano.sh/apis/pkg/apis/scheduling/types.go | Adds PodGroupInqueuedType condition constant for the internal scheduling API. |
| pkg/scheduler/framework/session.go | Extends session state with new dequeue-related function registries. |
| pkg/scheduler/framework/session_plugins.go | Adds registration + invocation paths for JobDequeueable/JobDequeued hooks. |
| pkg/scheduler/actions/enqueue/enqueue.go | Dequeues timed-out Inqueue jobs and records an Inqueued condition/timestamp on enqueue. |
| pkg/scheduler/plugins/inqueuetimeout/inqueuetimeout.go | Implements the inqueuetimeout plugin logic (global + per-PodGroup timeout). |
| pkg/scheduler/plugins/inqueuetimeout/inqueuetimeout_test.go | Adds unit tests for dequeue voting and timestamp extraction behavior. |
| pkg/scheduler/plugins/capacity/capacity.go | Releases reserved inqueue resources on dequeue via AddJobDequeuedFn. |
| pkg/scheduler/plugins/proportion/proportion.go | Releases reserved inqueue resources on dequeue via AddJobDequeuedFn. |
| pkg/scheduler/plugins/factory.go | Registers the new inqueuetimeout plugin builder. |
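According to the file summaries above, session.go and session_plugins.go extend the session with registries for the new `JobDequeueable`/`JobDequeued` hooks, mirroring the existing enqueue hooks. A rough, self-contained sketch of that registry pattern, with simplified stand-in types (the real `Session` and `JobInfo` carry far more state, and the exact signatures here are assumptions):

```go
package main

import "fmt"

// JobInfo is a simplified stand-in for the scheduler's api.JobInfo.
type JobInfo struct{ Name string }

// Session mimics the registry pattern used by session_plugins.go:
// plugins register callbacks, and actions invoke them by name.
type Session struct {
	jobDequeueableFns map[string][]func(*JobInfo) bool // vote: may this job be dequeued?
	jobDequeuedFns    map[string][]func(*JobInfo)      // notification: job was dequeued
}

func NewSession() *Session {
	return &Session{
		jobDequeueableFns: map[string][]func(*JobInfo) bool{},
		jobDequeuedFns:    map[string][]func(*JobInfo){},
	}
}

// AddJobDequeueableFn registers a voting callback for a plugin.
func (ssn *Session) AddJobDequeueableFn(plugin string, fn func(*JobInfo) bool) {
	ssn.jobDequeueableFns[plugin] = append(ssn.jobDequeueableFns[plugin], fn)
}

// AddJobDequeuedFn registers a post-dequeue notification callback.
func (ssn *Session) AddJobDequeuedFn(plugin string, fn func(*JobInfo)) {
	ssn.jobDequeuedFns[plugin] = append(ssn.jobDequeuedFns[plugin], fn)
}

// JobDequeueable returns true if any registered plugin votes to dequeue.
func (ssn *Session) JobDequeueable(job *JobInfo) bool {
	for _, fns := range ssn.jobDequeueableFns {
		for _, fn := range fns {
			if fn(job) {
				return true
			}
		}
	}
	return false
}

// JobDequeued notifies all registered plugins, e.g. so capacity/proportion
// can release the job's reserved inqueue resources.
func (ssn *Session) JobDequeued(job *JobInfo) {
	for _, fns := range ssn.jobDequeuedFns {
		for _, fn := range fns {
			fn(job)
		}
	}
}

func main() {
	ssn := NewSession()
	ssn.AddJobDequeueableFn("inqueuetimeout", func(j *JobInfo) bool { return true })
	var released []string
	ssn.AddJobDequeuedFn("capacity", func(j *JobInfo) { released = append(released, j.Name) })

	job := &JobInfo{Name: "pg-1"}
	if ssn.JobDequeueable(job) {
		ssn.JobDequeued(job)
	}
	fmt.Println(released) // [pg-1]
}
```

The split into a voting hook and a notification hook matches the description of the enqueue action: it first asks whether a timed-out job may be dequeued, then tells accounting plugins that the dequeue happened.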
```go
	Type:               scheduling.PodGroupInqueuedType,
	Status:             v1.ConditionTrue,
	LastTransitionTime: metav1.Now(),
	Reason:             "Enqueued",
	Message:            "PodGroup moved to Inqueue state",
})
```
Setting the new PodGroupInqueuedType condition to Status: ConditionTrue will make the job controller treat it as the latest "non-scheduled" condition and emit Warning events (see recordPodGroupEvent which warns on any latest ConditionTrue that is not PodGroupScheduled). Consider setting this condition to ConditionFalse (or otherwise preventing it from being selected as an active warning condition) so enabling/disabling the timeout feature doesn’t introduce noisy Warning events.
```go
for _, cond := range job.PodGroup.Status.Conditions {
	if cond.Type == scheduling.PodGroupInqueuedType {
		return cond.LastTransitionTime.Time
	}
}
```
getInqueueTimestamp() returns the first PodGroupInqueuedType condition it encounters. If multiple Inqueued conditions ever exist (e.g., due to append-on-enqueue), this will return a stale timestamp. Either enforce uniqueness of this condition when writing it, or update this helper to select the most recent matching condition (e.g., max LastTransitionTime).
```diff
-for _, cond := range job.PodGroup.Status.Conditions {
-	if cond.Type == scheduling.PodGroupInqueuedType {
-		return cond.LastTransitionTime.Time
-	}
-}
+var (
+	latest time.Time
+	found  bool
+)
+for _, cond := range job.PodGroup.Status.Conditions {
+	if cond.Type == scheduling.PodGroupInqueuedType {
+		t := cond.LastTransitionTime.Time
+		if !found || t.After(latest) {
+			latest = t
+			found = true
+		}
+	}
+}
+if found {
+	return latest
+}
```
```go
// Record Inqueued condition with timestamp for inqueue timeout tracking
job.PodGroup.Status.Conditions = append(job.PodGroup.Status.Conditions, scheduling.PodGroupCondition{
	Type:               scheduling.PodGroupInqueuedType,
	Status:             v1.ConditionTrue,
	LastTransitionTime: metav1.Now(),
	Reason:             "Enqueued",
	Message:            "PodGroup moved to Inqueue state",
})
```
Recording the Inqueued timestamp by blindly appending a new PodGroupInqueuedType condition can create multiple conditions of the same type over time (e.g., dequeue -> pending -> enqueue again). With the current getInqueueTimestamp() implementation (it returns the first match), a re-enqueued PodGroup may keep using an old timestamp and get immediately dequeued again. Use ssn.UpdatePodGroupCondition(...) (upsert by type) or otherwise ensure only one PodGroupInqueuedType exists / the latest timestamp is used.
```diff
-// Record Inqueued condition with timestamp for inqueue timeout tracking
-job.PodGroup.Status.Conditions = append(job.PodGroup.Status.Conditions, scheduling.PodGroupCondition{
-	Type:               scheduling.PodGroupInqueuedType,
-	Status:             v1.ConditionTrue,
-	LastTransitionTime: metav1.Now(),
-	Reason:             "Enqueued",
-	Message:            "PodGroup moved to Inqueue state",
-})
+// Record Inqueued condition with timestamp for inqueue timeout tracking.
+// Use UpdatePodGroupCondition to upsert by type so only one Inqueued condition exists.
+cond := &scheduling.PodGroupCondition{
+	Type:               scheduling.PodGroupInqueuedType,
+	Status:             v1.ConditionTrue,
+	LastTransitionTime: metav1.Now(),
+	Reason:             "Enqueued",
+	Message:            "PodGroup moved to Inqueue state",
+}
+ssn.UpdatePodGroupCondition(job.PodGroup, cond)
```
Signed-off-by: Rishi Prasad Sulakhe <rishiprasadsulakhe@gmail.com>
Force-pushed from 92f9367 to ca053c0.
Which issue(s) this PR fixes:
Fixes #5006 #4617
Background
Volcano's scheduler uses a logical queue-based resource view. When a job is enqueued, its `MinResources` are reserved against the queue's capacity. However, actual schedulability depends on real node-level conditions that the enqueue check doesn't evaluate. This mismatch causes jobs to sit in the Inqueue state, holding queue quota without making progress and blocking other schedulable jobs.
Solution
inqueuetimeoutplugin — votes to dequeue PodGroups that have been Inqueue longer than aconfigurable timeout without any pods being scheduled
JobDequeueable/JobDequeued) — mirrors the existingJobEnqueueable/JobEnqueuedpattern for clean plugin integrationPodGroupInqueuedTypecondition with timestamp whena PodGroup enters Inqueue, persisted in etcd across scheduler restarts
JobDequeuedFncallbacks tosubtract reserved inqueue resources when a job is dequeued
Configuration
Global default via plugin arguments:
Per-PodGroup override via annotation:
Disabled by default; users opt in by enabling the plugin.