Concurrent Map iteration and map write #1726
Comments
Thanks @rpemsel for reaching out. We received a similar report recently; this happens when listing and creation of new files are done concurrently on the same mount. While we work on a fix, we suggest avoiding concurrent listing and file creation if possible. Thanks for your patience!
Hi @rpemsel, thanks for reporting this issue! The fix has been merged into master and will be included in the March 2024 release. Thanks,
This issue is resolved in GCSFuse v2.0.0. Please upgrade and reopen the issue if necessary. Thanks,
Thanks for the fix,
@sethiay I have one more question.
@tred77 Could you please raise this on the https://github.com/GoogleCloudPlatform/gcs-fuse-csi-driver/ repository? Thanks!
Describe the issue
A user of our platform who uses GCS FUSE via the GCS FUSE CSI Driver reported an issue with the GCS FUSE part of the implementation. The sidecar container running GCS FUSE crashed with the error message "concurrent map iteration and map write". Also see the following logs:
downloaded-logs-20240219-062431.json.
According to the logs, the crash appears to be related to concurrent read/write operations on a shared map. Unfortunately, this issue cannot easily be reproduced.
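For context, "concurrent map iteration and map write" is a fatal error raised by the Go runtime when one goroutine ranges over a plain `map` while another writes to it. The sketch below is a generic illustration of the usual fix (guarding the map with a `sync.RWMutex`), not GCSFuse's actual code; the `safeMap` type and its methods are hypothetical names chosen for this example:

```go
package main

import (
	"fmt"
	"sync"
)

// safeMap guards a plain map with an RWMutex so that iteration
// (e.g. listing files) and writes (e.g. creating files) can run
// concurrently without triggering the Go runtime's fatal
// "concurrent map iteration and map write" error.
type safeMap struct {
	mu sync.RWMutex
	m  map[string]int
}

// set takes the write lock before mutating the map.
func (s *safeMap) set(k string, v int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[k] = v
}

// keys takes the read lock for the whole iteration.
func (s *safeMap) keys() []string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make([]string, 0, len(s.m))
	for k := range s.m {
		out = append(out, k)
	}
	return out
}

func main() {
	sm := &safeMap{m: map[string]int{}}
	var wg sync.WaitGroup

	// Writer goroutine: simulates concurrent file creation.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			sm.set(fmt.Sprintf("file-%d", i), i)
		}
	}()

	// Reader goroutine: simulates concurrent listing.
	// Without the locks above, this range loop could crash the process.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 100; i++ {
			_ = sm.keys()
		}
	}()

	wg.Wait()
	fmt.Println(len(sm.keys()))
}
```

Running the unlocked equivalent under load (or with `go run -race`) reliably exposes the race; `sync.Map` is an alternative when keys are written once and read many times.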
System (please complete the following information):
Additional context
The issue occurred during high-load read/write operations.
SLO:
We strive to respond to all bug reports within 24 business hours.