memory leak #750
It might not be a memory leak; it might just be the way Python's memory allocator works.
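One way to tell the two apart is `tracemalloc` from the standard library: genuine leaks show up as allocations that keep growing between snapshots, while allocator fragmentation does not appear in the traced statistics at all. A minimal sketch (the `leaky` list here is a simulated leak, not ElectrumX code):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Simulated leak: objects that stay referenced and are never freed.
leaky = []
for _ in range(10000):
    leaky.append("x" * 100)

after = tracemalloc.take_snapshot()

# Lines whose allocations grew the most between the two snapshots.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```

Running two snapshots a few hours apart on a long-lived server narrows a leak down to specific source lines; if RES grows but the snapshot diff stays flat, the growth is more likely allocator or native-library behavior.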
I've also encountered seemingly unbounded memory growth slowly over time, but I could not pinpoint when exactly it happens, or the reason. I've only seen it 2-3 times, and after a restart it went away. In one case, the RES memory usage was going up by ~400 MB every day, for weeks; it was up to around 15 GB when I restarted it. (This was on Python 3.7, so not related to the known FD leak issue on 3.6.)
Today I had to restart the ElectrumX index process due to an unknown memory leak that causes huge swap usage; the server (AWS EC2 t3.medium instance, Ubuntu 18.04) becomes almost unusable. I've upgraded ElectrumX from v1.8.12 to v1.9.5 and Python from v3.7.1 to v3.7.2, with peer discovery disabled; I will report whether the issue persists in the latest version.
It could also be some kind of caching mechanism in RocksDB / LevelDB, I guess. It would be good to know the OS, backend DB, and coin for those with these issues.
Here is my report on AWS EC2 (Ubuntu 18.04.2). It seems the system's pattern has changed after the latest electrumx/python upgrade. I monitor the system disk volume (SSD) because the root cause of the system becoming busy and unusable is that RAM is gradually consumed by the leaking processes; the swap partition is then accessed heavily, which leads to higher read/write bandwidth and a declining disk burst credit balance in the EC2 CloudWatch monitoring details.

The interesting part: if I run only 1 ElectrumX index Docker instance on a machine, it lasts 20 days before the system becomes busy; if I run 2 ElectrumX Docker instances on the same machine, it lasts 10 days; with 4 instances I have to restart them after 5 days. When the system becomes extremely busy and I manage to ssh into the server, I can see each python3 process (running in an ElectrumX Docker container) consuming more than 30% of memory. Each consumes only 1~3% of memory on its first day after launch; the footprint grows quickly over the following days.

I am glad that after the ElectrumX 1.9.5 / Python 3.7.2 upgrade, now 4 days in, the python3 processes still consume only 1~3% of memory and disk access is in a very healthy state ;-) According to the previous pattern the system would become busy tomorrow; I will report whether it survives this weekend.
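The kind of periodic memory check described above can also be automated from inside the process, without `top` or CloudWatch. A minimal sketch using only the standard library's `resource` module (note: `ru_maxrss` is the peak RSS, reported in KiB on Linux but in bytes on macOS; the sample count and interval here are arbitrary):

```python
import resource
import time


def peak_rss_kib():
    """Peak resident set size of this process (KiB on Linux)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


def monitor(samples=3, interval=0.1):
    """Collect periodic peak-RSS readings; a steady rise across days
    points at a leak rather than one-time startup caching."""
    readings = []
    for _ in range(samples):
        readings.append(peak_rss_kib())
        time.sleep(interval)
    return readings


print(monitor())
```

Logging such readings alongside timestamps makes it easy to correlate the growth curve with specific events (reorgs, peer bursts, client reconnect storms).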
Issue Closed. |
I am also seeing a clear memory leak.
Same as @shsmith, I am experiencing a terrible memory leak on a server that is under an application-level DDoS. This is on a recent commit. I have to restart it about every 3 days, which is when the python process' RES memory usage reaches 40-50 GB (the server has 64 GB, and it actually ran OOM a few times). On another server I run that is not under attack, there is no issue. Is your leaking server also under attack, @shsmith?
I suspect this is still down to asyncio bugs in Python; some have been fixed in recent 3.7 releases or the upcoming 3.8, I think.
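One asyncio leak pattern worth ruling out on an affected server: tasks that are spawned but never awaited or cancelled accumulate indefinitely, each pinning its coroutine frame and captured references. Periodically counting live tasks can confirm or exclude this. A minimal sketch (the `idle` coroutine simulates forgotten per-connection handlers; it is illustrative, not ElectrumX code):

```python
import asyncio


async def main():
    async def idle():
        # Stands in for a handler that never finishes.
        await asyncio.sleep(3600)

    # Simulate leaked tasks: scheduled but never awaited or cancelled.
    leaked = [asyncio.ensure_future(idle()) for _ in range(5)]

    # Count tasks that are still alive; this includes main() itself.
    live = [t for t in asyncio.all_tasks() if not t.done()]
    print(f"live tasks: {len(live)}")

    for t in leaked:
        t.cancel()


asyncio.run(main())
```

If that count climbs in step with RES memory over days, the leak is task retention in application code rather than the interpreter; if it stays flat while memory grows, the asyncio-bug theory above is more plausible.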
I have been running ElectrumX v1.8.12 as an index server for BTC / LTC on Ubuntu 18.04 for two months. It seems the memory consumption of electrumx keeps growing, and I have to restart the electrumx servers every 2 weeks.
I am not sure, but the root cause could be the aiohttp memory leak issue (aio-libs/aiohttp#3631); hopefully the aiohttp fix will solve the problem.