
Conversation

@shjwudp (Contributor) commented Jan 21, 2026

What does this PR do?

Changes in this PR:

  1. Add end-to-end test for Megatron-FSDP (MCore + ND-Parallel) integration.
  2. Fix `optim` and `optim_grads` sharding compatibility issues that occur with modules that modify their inputs in place.

⚠️ For major changes (either in lines of code or in impact), please make sure to first share a design doc with the team. If you're unsure of the best way to do so, contact the @mcore-oncall.

Contribution process

flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message @mcore-oncall or mention them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers' reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

2. Make M-FSDP `optim`, `optim_grads` compatible with in-place functions
@shjwudp requested review from a team as code owners January 21, 2026 17:26
@copy-pr-bot (bot) commented Jan 21, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ko3n1g requested a review from a team January 21, 2026 17:27
@shjwudp changed the title from "Add end-to-end M-FSDP tests and fix optim and optim_grads sharding compatibility issues" to "Add end-to-end tests for M-FSDP and ND-Parallel" Jan 21, 2026
@shjwudp added the Expert Review and module: megatron-fsdp labels Jan 21, 2026
@cspades (Member) left a comment:

Code review finished, post-backward refactor looks very good to me!

Have not checked unit tests yet, will read later!

Comment on lines +984 to +986
param.register_post_accumulate_grad_hook(
lambda p: _process_post_backward_gradients([p])
)
@cspades (Member) commented:

This is a neat PyTorch-native hook. I wonder if it is more tightly integrated with gradient computation streams in general; maybe it will fix many bugs that were not caught by a simple post-backward hook? 🙏🏻

@shjwudp (Contributor, Author) replied:

Yes, our customized post-backward mechanism becomes ineffective for modules that perform in-place input modifications, so its applicability is limited: it works only for the FSDP modules we have specified, not for every torch module. The register_post_accumulate_grad_hook function nicely compensates for this limitation by providing a more accurate trigger right after gradient accumulation.
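
For readers unfamiliar with the hook, here is a minimal, self-contained sketch of the parameter-level trigger. It is not the Megatron-FSDP code: `process_gradient` is a hypothetical stand-in for the per-parameter work (gradient reduction, sharded-optimizer bookkeeping), and it assumes PyTorch 2.1+, where `Tensor.register_post_accumulate_grad_hook` is available. The hook fires after autograd finishes accumulating each parameter's gradient, even when a module such as `nn.ReLU(inplace=True)` modifies its input in place.

```python
# Minimal sketch, assuming PyTorch >= 2.1 (Tensor.register_post_accumulate_grad_hook).
# process_gradient is a hypothetical placeholder for the per-parameter work that
# Megatron-FSDP would do here (e.g. gradient reduction / sharded-optimizer updates).
import torch
import torch.nn as nn


def process_gradient(param: torch.Tensor) -> None:
    # Called once per parameter, right after its gradient has been accumulated
    # into param.grad during backward().
    print(f"grad ready: shape={tuple(param.shape)}, norm={param.grad.norm().item():.4f}")


model = nn.Sequential(
    nn.Linear(8, 8),
    nn.ReLU(inplace=True),  # a module that modifies its input in place
    nn.Linear(8, 1),
)

for param in model.parameters():
    # Per-parameter trigger tied to gradient accumulation itself, not to module
    # boundaries, so in-place input modifications cannot cause it to be skipped.
    param.register_post_accumulate_grad_hook(process_gradient)

model(torch.randn(4, 8)).sum().backward()
```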

@ericharper added the Final Review label and removed the Expert Review label Jan 22, 2026
@shjwudp (Contributor, Author) commented Jan 23, 2026

/ok to test 451c17d


Labels

Final Review, module: megatron-fsdp

4 participants