Adding particles to pset in MPI run #1193
Per the Issue reported in #1193
Thanks for reporting, @claudiofgcardoso. I think that I created a quick fix for this bug in #1194; can you check whether it works for you? Note, though, that because every change in the
Hi @erikvansebille, many thanks for the quick response and fix! I tested this fix for two case studies (both of which perform the execution and the writing of the pset in a daily loop):
I think the problem is the following: to determine which new particles should be added, I need to check which of them are not within 3 km of the existing particles (which are divided among the 4 procs in this case). Because I do this check in the 4 psets independently, the number of new particles to be added differs from proc to proc, which then leads to this error. So, in order to correctly determine the number of new particles to release, and thus add the same number of new particles to the pset on every proc, I would need access to the full pset. Does this make sense? And if so, is it feasible? Cláudio
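To illustrate the idea of "access to the full pset": one conceivable approach is for every rank to gather all existing particle positions before filtering, so each rank computes the identical candidate set (and hence the same number of additions). This is only a rough sketch of mine, not Parcels code: the function names (`gather_positions`, `candidates_to_add`) are hypothetical, positions are assumed to be `(lon, lat)` NumPy arrays, and the distance threshold is in planar degrees rather than kilometres for brevity. It assumes `mpi4py` (which Parcels MPI runs already use) and falls back to serial behaviour when MPI is unavailable.

```python
import numpy as np

try:
    from mpi4py import MPI  # present in Parcels MPI runs
    COMM = MPI.COMM_WORLD
except ImportError:  # serial fallback so the sketch also runs without MPI
    COMM = None


def gather_positions(local_pos):
    """Collect the (lon, lat) positions of ALL existing particles across ranks."""
    if COMM is None or COMM.Get_size() == 1:
        return local_pos
    # allgather returns one array per rank; stack them into the full set
    return np.concatenate(COMM.allgather(local_pos), axis=0)


def candidates_to_add(local_existing, candidates, min_dist):
    """Filter candidates against the FULL particle set, so every rank
    derives the identical set (and number) of new particles to add."""
    all_existing = gather_positions(local_existing)
    if all_existing.shape[0] == 0:
        return candidates
    # pairwise distances: (n_candidates, n_existing)
    d = np.linalg.norm(candidates[:, None, :] - all_existing[None, :, :], axis=-1)
    return candidates[d.min(axis=1) >= min_dist]
```

Every rank would then call `pset.add` with (its share of) the same filtered set, avoiding the mismatch in particle counts between procs.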
Hi @claudiofgcardoso, good to hear that PR #1194 fixes your simple scenario. I'm afraid, however, that the more complicated scenario will be difficult to fix on the backend, and unfortunately we don't have the development capacity to work on this for the foreseeable future. You're very welcome to try to come up with your own fix (Parcels is a community code!), but be aware that the MPI implementation is still a bit rudimentary. An alternative is to work around this limitation in the code: use your scenario a) but then immediately remove the unwanted particles again? Or would that defeat the purpose?
Hi @erikvansebille! Unfortunately I don't think I can contribute to Parcels' MPI implementation at the moment, as my knowledge of MPI is very limited. Regarding your suggestion, I doubt it would be effective: new particles added in proc 0 could pass the check against the existing particles in proc 0, but could still be within range of existing particles in other procs. So I think my only option is to run it in serial and be patient :)
Hi @erikvansebille, I'm not sure whether this error is related to this issue, but I suppose it is, because I couldn't find any other similar report in the repository... When the MPI run finished, I got the following error during pset.export():
Since my run has 1825 timesteps and I ran the simulation with 12 CPUs, this may explain the enormous shape of the array being created (21892 / 1825 ≈ 12). How can I solve this?
I managed to export the temporary files to a netCDF using the script developed by @JamiePringle and shared in #1091 (many thanks for this!). Considering that this should be fixed by the migration of the output from .npy to zarr (#1199), I will close the issue.
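For anyone hitting the same export mismatch: the core of such a merge is padding each processor's (trajectory × observation) array to a common observation count before stacking. The sketch below is my own generic illustration, not Parcels' export code nor the linked script; it assumes each proc produced a float array with trajectories along axis 0 and per-timestep observations along axis 1.

```python
import numpy as np


def merge_proc_outputs(arrays, fill_value=np.nan):
    """Stack per-processor (trajectory, observation) arrays into one array,
    padding shorter observation axes with fill_value so shapes agree."""
    max_obs = max(a.shape[1] for a in arrays)
    padded = [
        np.pad(a.astype(float), ((0, 0), (0, max_obs - a.shape[1])),
               constant_values=fill_value)
        for a in arrays
    ]
    return np.concatenate(padded, axis=0)
```

The merged array can then be written out with xarray or netCDF4 as usual; NaN padding marks observations a trajectory never reached.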
Hello all,
I'm trying to run Parcels by executing the pset at every output dt (following the example provided in this tutorial), because I want to conditionally release particles every 5 days, depending on whether there are any existing particles within 3 km of the new ones. Because of the size of the pset, I'm trying to do this in parallel, but apparently I'm not able to add new particles to the existing pset, which raises the following error:
Is there a way to pause the parallel computation on the pset, keep the object so that new particles can be added, and then redistribute these particles across the processes, without calling pset.export() or pset.close()?
Cláudio
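The 3 km proximity check itself could look like the following self-contained sketch (my own illustration, not a Parcels API; `haversine_m` and `far_enough` are hypothetical names, and distances are great-circle on a spherical Earth):

```python
import numpy as np

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, assumed spherical


def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between points given in degrees."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))


def far_enough(new_lon, new_lat, old_lon, old_lat, min_dist_m=3000.0):
    """Boolean mask: True for new particles at least min_dist_m away
    from EVERY existing particle."""
    d = haversine_m(new_lon[:, None], new_lat[:, None],
                    old_lon[None, :], old_lat[None, :])
    return d.min(axis=1) >= min_dist_m
```

Only the candidates where the mask is True would be passed on for release; under MPI the existing positions would first have to be gathered from all procs, which is exactly the difficulty discussed above.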