Support use in CUDA device-side code #17

Open
eyalroz opened this issue Aug 18, 2019 · 1 comment
Labels
enhancement New feature or request

Comments

eyalroz commented Aug 18, 2019

NVIDIA's CUDA is a popular ecosystem for general-purpose GPU programming. Essentially, in CUDA you write kernels to be executed on a GPU using a slightly restricted variant of C++. However, functions which run on the GPU device side need to be annotated with __device__. (For constexpr functions this can be skipped, but only with a certain compiler flag which shouldn't be relied upon.) Also, host-side code from the standard C++ library is not usable in device code.

I would like to ask that this span implementation be adapted for use with CUDA. I've done something similar for std::array, although there's a bit of unnecessary boilerplate in my additions there.
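
For context, the usual approach is to wrap the annotations in a macro that expands to __host__ __device__ only when compiling under nvcc, so host-only builds are unaffected. Below is a minimal sketch of that pattern; the macro name TCB_SPAN_HOST_DEVICE and the simplified span-like class are hypothetical illustrations, not the library's actual code:

```cpp
#include <cstddef>

// Hypothetical macro (not part of this library): expands to the CUDA
// annotations only when the CUDA compiler is in use, and to nothing
// in an ordinary host-only C++ build.
#if defined(__CUDACC__)
#define TCB_SPAN_HOST_DEVICE __host__ __device__
#else
#define TCB_SPAN_HOST_DEVICE
#endif

// Simplified illustration of how member functions would be annotated.
template <typename T>
class simple_span {
public:
    TCB_SPAN_HOST_DEVICE constexpr simple_span(T* ptr, std::size_t count) noexcept
        : data_(ptr), size_(count) {}

    TCB_SPAN_HOST_DEVICE constexpr T* data() const noexcept { return data_; }
    TCB_SPAN_HOST_DEVICE constexpr std::size_t size() const noexcept { return size_; }
    TCB_SPAN_HOST_DEVICE constexpr T& operator[](std::size_t i) const { return data_[i]; }

private:
    T* data_;
    std::size_t size_;
};
```

With every member function annotated this way, the same header can be used both inside __global__ kernels and in ordinary host code.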

@tcbrindle
Owner

Hi @eyalroz, thanks for your interest in this library.

I'm afraid I have no experience with CUDA and no access to Nvidia hardware, so this is not something I would be able to do myself. However, I'd be happy to accept patches for CUDA support (suitably protected by macros, of course) if you or someone else would like to add it.

@tcbrindle added the enhancement (New feature or request) label Sep 12, 2019