Add roi_align nondeterministic support for XPU (#8931)
Conversation
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/8931

❌ 4 New Failures, 4 Pending, 1 Unrelated Failure as of commit 79da54b with merge base d462da2.

NEW FAILURES — The following jobs have failed.

BROKEN TRUNK — The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @frost-intel! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations, and the pull request will be tagged accordingly.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Summary:

Co-authored-by: frost-intel <frost.mitchell@intel.com>
Reviewed By: scotts
Differential Revision: D77997052
fbshipit-source-id: 40237bcbf1e8e42ca800538490b1dd237ffd67d5
Fixes part of intel/torch-xpu-ops#1264, together with pytorch/pytorch#147541, allowing the XPU device to use the nondeterministic compiled `_roi_align` op.
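For context, a minimal sketch of the deterministic-algorithms switch this PR interacts with. `roi_align`'s backward pass accumulates gradients with atomic adds on GPU-like devices, so it is registered as nondeterministic; `torch.use_deterministic_algorithms` gates whether such ops may run. (The device check and any XPU-specific behavior are assumptions here, not shown by this PR's diff.)

```python
import torch

# Hedged sketch: deterministic-algorithms mode controls whether ops with
# nondeterministic implementations (such as roi_align's backward) are allowed
# to run. With warn_only=True, such ops emit a warning instead of raising,
# so the nondeterministic kernel still executes.
torch.use_deterministic_algorithms(True, warn_only=True)
print(torch.are_deterministic_algorithms_enabled())           # True
print(torch.is_deterministic_algorithms_warn_only_enabled())  # True

# Restore the default so later code is unaffected.
torch.use_deterministic_algorithms(False)
```

With `warn_only=False` (the default for that flag), invoking an op whose nondeterministic path would be taken raises a `RuntimeError` instead.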