Features/1400 implement unfold operation similar to torch tensor unfold #1419
base: main
Conversation
Thank you for the PR!
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #1419      +/-   ##
==========================================
+ Coverage   91.91%   91.93%   +0.02%
==========================================
  Files          80       80
  Lines       11942    11973      +31
==========================================
+ Hits        10976    11007      +31
  Misses        966      966

Flags with carried forward coverage won't be shown.
The tests on the CUDA runner seem to hang at …
On the Terrabyte cluster, using 8 processes on 2 nodes with 4 GPUs each, I get the following error: […]
On CPU, everything seems to work (at least in …).
@FOsterfeld there seems to be an error now on the CUDA runner. As it fails in unfold, it's maybe not a random CI error due to overloaded runners but really something in unfold.
There seems to be something wrong with the communication in DNDarray.get_halo(): sometimes the halo that is sent from the last rank to the rank before it is faulty. This happened irregularly in my tests without any randomization in the data, so it may depend on the order in which the non-blocking halo sends are fulfilled. In 825979c I tested get_halo(prev=False) with blocking sends instead; this eliminated all errors, but it is obviously not a final solution to the problem.
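For context, a minimal mpi4py sketch (my own illustration, not Heat's actual get_halo code) of the classic hazard with non-blocking sends that would match this symptom: the send buffer has to stay alive and unmodified until the request completes, whereas a blocking send has no such window. Variable names and the tag value are made up for the example.

```python
# Minimal sketch (NOT Heat's implementation): with Isend, the send buffer
# must remain alive and unmodified until the request completes, otherwise
# the receiver can observe faulty halo data. Blocking Send avoids this.
# Run with e.g.: mpirun -n 2 python halo_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()
assert nprocs >= 2, "needs at least 2 processes"

halo_size = 2
local = np.arange(4, dtype=np.float64) + 4 * rank  # this rank's chunk

if rank == nprocs - 1:
    # Halo for the previous rank: a contiguous copy of the chunk's head.
    send_buf = np.ascontiguousarray(local[:halo_size])
    req = comm.Isend(send_buf, dest=rank - 1, tag=17)
    # Danger zone: if send_buf were a temporary that gets overwritten or
    # garbage-collected before req completes, the message contents would
    # be undefined. A blocking comm.Send(send_buf, ...) has no such gap.
    req.Wait()
elif rank == nprocs - 2:
    halo = np.empty(halo_size, dtype=np.float64)
    comm.Recv(halo, source=rank + 1, tag=17)
    print(f"rank {rank} received halo {halo}")
```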
Due Diligence
Description
Add the function unfold to the available manipulations. For a DNDarray a, unfold(a, dimension, size, step) behaves like torch.Tensor.unfold.
Example:
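A small worked example: the torch call shows the reference semantics, and the Heat call follows the signature given above (the top-level namespace ht.unfold is my assumption based on this PR's description).

```python
import torch
import heat as ht

x = torch.arange(8)
print(x.unfold(0, 3, 2))
# tensor([[0, 1, 2],
#         [2, 3, 4],
#         [4, 5, 6]])  <- windows of size 3, taken every 2 elements along dim 0

# Proposed Heat equivalent on a distributed array (signature from this
# PR's description; exact namespace assumed):
a = ht.arange(8, split=0)
print(ht.unfold(a, 0, 3, 2))
```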
Issue/s resolved: #1400
Changes proposed:
Type of change
Memory requirements
Performance
Does this change modify the behaviour of other functions? If so, which?
No.