-

If you have a list of indices, you can use:

```python
import itertools

import numpy as np
import xarray as xr

indices = xr.DataArray(
    np.array(list(itertools.combinations(range(2000), 2))),
    dims=("ID", "pair"),
)
reordered = ds.isel(ID=indices)
```

Of course, this will create an additional dimension of size 1999000, so you might have to use

```python
stats = some_function(reordered.isel(pair=0), reordered.isel(pair=1))
# or, even better:
stats = reordered.reduce(some_function, dim="pair")
stats = some_function(reordered, dim="pair")
```

The last version can be implemented using …
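As a concrete sanity check, here is a small self-contained sketch of the same trick with 4 particles instead of 2000 (the variable `x`, the dataset contents, and the separation statistic are made up for illustration; only the indexing pattern matches the suggestion above):

```python
import itertools

import numpy as np
import xarray as xr

n = 4  # small stand-in for the 2000 particles
ds = xr.Dataset(
    {"x": (("time", "ID"), np.arange(12, dtype=float).reshape(3, n))},
    coords={"time": [0, 1, 2]},
)

# All unordered pairs of distinct IDs, shape (n_pairs, 2)
indices = xr.DataArray(
    np.array(list(itertools.combinations(range(n), 2))),
    dims=("ID", "pair"),
)

# Vectorized indexing: the indexer's dims replace the original ID
# dimension, so the result has ID of size n*(n-1)/2 and pair of size 2.
reordered = ds.isel(ID=indices)

# Example pairwise statistic: separation along x, dims (time, ID)
sep = reordered.x.isel(pair=1) - reordered.x.isel(pair=0)
```

With `n = 4` this gives 6 pairs; scaling `n` to 2000 reproduces the 1999000-pair case without any Python-level loop.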
-
I have a Lagrangian particle dataset which contains 2000 Lagrangian particles (`ID`). I need to perform a relative dispersion calculation in which every two particles are grouped into a pair, e.g. the pair of particles with IDs 0 and 1 (the two IDs in a pair should not be the same). I can do this with an explicit loop, but the number of pairs (2000*1999/2 = 1999000) is quite large, so the explicit loop is very slow. Is there a more efficient and elegant way of doing this? Or can I convert the dataset from dimensions (`time`, `ID`) into (`time`, `pair`, `ID`) so that the `pair` dimension covers all combinations of two IDs (which is convenient for iterating over all pairs)? Ideally the result `stat` would also have dimensions (`time`, `pair`). But if `pair` is too large, one can apply a reduction like `mean` to average over the `pair` dimension.
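For reference, the explicit pairwise loop described above might look something like the following sketch (4 particles instead of 2000; the variable `x` and the squared-separation statistic are made up for illustration, since the question's actual code is not shown):

```python
import itertools

import numpy as np
import xarray as xr

n = 4  # stand-in for the 2000 particles
ds = xr.Dataset(
    {"x": (("time", "ID"), np.arange(12, dtype=float).reshape(3, n))}
)

# Loop over all n*(n-1)/2 unordered pairs of distinct IDs -- this is the
# part that becomes very slow when n = 2000 (1999000 iterations).
per_pair = []
for i, j in itertools.combinations(range(n), 2):
    sep2 = (ds.x.isel(ID=j) - ds.x.isel(ID=i)) ** 2  # squared separation
    per_pair.append(sep2)

stat = xr.concat(per_pair, dim="pair")  # dims: (pair, time)
```

Each iteration produces a `(time,)` statistic for one pair, and `xr.concat` stacks them into the desired `(pair, time)` result; the cost is one Python iteration per pair, which motivates the vectorized indexing approach.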