Add end-to-end tests for M-FSDP and ND-Parallel #3031
base: main
Conversation
2. Fix M-FSDP `optim`, `optim_grads` to be compatible with in-place functions
`optim` and `optim_grads` sharding compatibility issues
cspades left a comment
Code review finished, post-backward refactor looks very good to me!
Have not checked unit tests yet, will read later!
```python
param.register_post_accumulate_grad_hook(
    lambda p: _process_post_backward_gradients([p])
)
```
This is a neat PyTorch-native hook. I wonder if it is more tightly integrated with the gradient computation streams in general; maybe it will fix many bugs that were not caught by a simple post-backward hook? 🙏🏻
Yes, our customized post-backward mechanism becomes ineffective for modules that perform in-place input modifications, so its applicability is limited: it works for the FSDP modules we've specified, but not for every torch module. The `register_post_accumulate_grad_hook` function nicely compensates for this limitation by providing a more accurate trigger right after gradient accumulation.
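A minimal, self-contained sketch of that timing (illustrative only, not M-FSDP code; `ToyBlock` and the print-based stub `_process_post_backward_gradients` are hypothetical stand-ins), assuming PyTorch >= 2.1 where `register_post_accumulate_grad_hook` is available:

```python
import torch
import torch.nn as nn


def _process_post_backward_gradients(params):
    # Stand-in for the real per-parameter post-backward work
    # (e.g. gradient reduction / sharded-optimizer bookkeeping).
    for p in params:
        print(f"grad ready for param with shape {tuple(p.shape)}")


class ToyBlock(nn.Module):
    """Illustrative module with an in-place op in its forward."""

    def __init__(self, dim: int = 8):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.linear(x))


model = ToyBlock()
for param in model.parameters():
    # The hook receives the parameter itself and runs right after its
    # gradient has been accumulated into param.grad during backward.
    param.register_post_accumulate_grad_hook(
        lambda p: _process_post_backward_gradients([p])
    )

# Each backward pass triggers the hook once per parameter, including when
# gradients are accumulated over several micro-batches.
for _ in range(2):
    model(torch.randn(4, 8)).sum().backward()
```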
/ok to test 451c17d
What does this PR do?
Changes in this PR:
- Fix `optim` and `optim_grads` sharding compatibility issues that occur with certain in-place input modules (see the illustrative sketch below).
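As a hedged illustration only (the `InplaceInputBlock` module below is hypothetical and not taken from this PR), an "in-place input module" here means a module whose forward mutates the tensor it receives before using it:

```python
import torch
import torch.nn as nn


class InplaceInputBlock(nn.Module):
    """Hypothetical example of a module that modifies its input in place."""

    def __init__(self, dim: int = 8):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x):
        x.relu_()                 # mutates the caller's tensor in place
        return self.linear(x)


block = InplaceInputBlock()
data = torch.randn(4, 8)          # plain data tensor (requires_grad=False)
block(data).sum().backward()      # autograd still works; `data` was mutated
```

Per the review thread above, the customized post-backward mechanism can be unreliable for such modules, which is why per-parameter `register_post_accumulate_grad_hook` callbacks are used as the gradient-ready trigger.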
Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks
Code review
The following process is enforced via the CODEOWNERS file for changes into `megatron/core`. For changes outside of `megatron/core`, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch
Feel free to message or tag @mcore-oncall to help accelerate your merge into `main`. The less complex your PR is, the faster it will be approved and merged!
(Step 1): Add PR label
Expert Review (Step 2): Collect the expert reviewers' reviews
Add the `Expert Review` label when your PR is ready for review. Final Review might get declined if these requirements are not fulfilled.
(Step 3): Final Review
Add the `Final Review` label.

(Optional Step 4): Cherry-pick into release branch
If this PR also needs to be merged into `core_r*` release branches, after this PR has been merged, select `Cherry-pick` to open a new PR into the release branch.

For MRs into `dev` branch
The proposed review process for the `dev` branch is under active discussion. MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR
Any member of `core-adlr` and `core-nemo` will be able to merge your PR.