Replace numba-cuda runtime dependency with cuda-core #589

jameslamb merged 3 commits into rapidsai:main
Conversation
jameslamb left a comment
One question, approving so you can merge if it turns out I was wrong there.
dependencies.yaml (Outdated)
      - pytest-asyncio>=1.0.0
      - pytest-rerunfailures!=16.0.0  # See https://github.com/pytest-dev/pytest-rerunfailures/issues/302
      - rapids-dask-dependency==26.4.*,>=0.0.0a0
      - numba-cuda>=0.22.1
I understand why this is moving to a test-only dependency, but is it now safe to install only numba-cuda instead of numba-cuda[cu12] / numba-cuda[cu13] at test time?
I think we'd still want to preserve the structure that was removed above, like this:
      - output_types: [conda]
        packages:
          - *numba_cuda
    specific:
      - output_types: [requirements, pyproject]
        matrices:
          - matrix:
              cuda: "12.*"
              cuda_suffixed: "true"
            packages:
              - *numba_cuda_cu12
          - matrix:
              cuda: "13.*"
              cuda_suffixed: "true"
            packages:
              - *numba_cuda_cu13
          # fallback to numba-cuda with no extra CUDA packages if 'cuda_suffixed' isn't true
          - matrix:
            packages:
              - *numba_cuda

To be sure we get the correct dependency pins based on major CUDA version.
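
As an illustration only (not part of the PR or of this comment), a minimal Python smoke check of the sort that motivates this concern: it only passes when the installed numba-cuda can reach CUDA bindings for the local toolkit, which is what the [cu12]/[cu13] extras help guarantee in pip environments. The APIs used are standard numba-cuda; the checks themselves are hypothetical:

    # Hypothetical test-time smoke check, not actual UCXX test code.
    from numba import cuda

    # Fails if numba-cuda cannot reach a usable CUDA driver/runtime.
    assert cuda.is_available()

    # Report which CUDA version the installed runtime provides.
    major, minor = cuda.runtime.get_version()
    print(f"numba-cuda sees CUDA {major}.{minor}")

    # Round-trip a DeviceNDArray, the object UCXX continues to exercise in tests.
    d_arr = cuda.device_array(16, dtype="u1")
    assert d_arr.__cuda_array_interface__["shape"] == (16,)
    assert d_arr.copy_to_host().shape == (16,)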
You're very much right, that was my mistake, thanks for catching it! Could you please check whether 0316334 correctly resolves it?
yep that looks perfect, thanks!
@ me for an admin-merge once you get a ucxx-python-codeowners approval, if the check-nightly-ci isn't fixed yet.
Apparently I'm not in ucxx-python-codeowners? I think this looks good and I'm familiar with the pieces that are changing here. Feel free to request an admin merge if you think it's had enough eyes.

I'm happy to have you added if you want to be a reviewer, that would in fact be very welcome! I'm also satisfied with both your and James' reviews, so if @jameslamb is ok just go ahead and do an admin merge, since we will need it anyway. And thank you both for the reviews!

Ok, I'll admin-merge this and add Bradley as a codeowner in that group.

Thanks so much, James!
Numba-cuda is planning to drop DeviceNDArray (see NVIDIA/numba-cuda#546), and in UCXX (specifically, only in distributed-ucxx) we use it at runtime only when RMM is not available. There are no known use cases of distributed-ucxx where it is not installed as part of RAPIDS, which always brings RMM along, so removing numba-cuda should have no impact on users while simplifying UCXX maintenance.

We will continue testing numba-cuda DeviceNDArrays while support is not fully dropped, since they may still be used without Dask/Distributed to provide CUDA support via __cuda_array_interface__.
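
For context, a simplified sketch of the pre-PR runtime behavior the description refers to; the helper name and structure here are illustrative, not the actual distributed-ucxx code:

    # Illustrative only: RMM-first allocation with a numba-cuda fallback,
    # which is the runtime dependency this PR removes.
    def device_buffer(nbytes: int):
        try:
            import rmm  # always present when installed as part of RAPIDS

            return rmm.DeviceBuffer(size=nbytes)
        except ImportError:
            # Fall back to a numba-cuda DeviceNDArray. Both objects expose
            # __cuda_array_interface__, which is what UCXX relies on for transfers.
            from numba import cuda

            return cuda.device_array(nbytes, dtype="u1")

    buf = device_buffer(1024)
    assert hasattr(buf, "__cuda_array_interface__")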