Untimely event consumption may cause node OOM #5721
Comments
Does this issue only exist in nodes where the event service is enabled? Will data loss occur if it is not optimized?
@jwrct The event service includes the native queue and the event plugins. The issue does not exist in the native queue (namely ZeroMQ), but it does exist in the Mongo and Kafka plugins, so data loss may occur in those cases.
In the Manager class, create a new thread that monitors the capsule queue: if the queue's size grows too large, pause block synchronization; otherwise, resume it.
Another possible solution is to set a threshold MAX_QUEUE_SIZE (say, 1000): when the length of the triggerCapsuleQueue exceeds MAX_QUEUE_SIZE, block processing is suspended; once it falls below MAX_QUEUE_SIZE, block processing is resumed.
While the queue is too large, block processing stays paused and the check sleeps for 2000 ms before retrying. The value of MAX_QUEUE_SIZE is based on the number of events in 200 blocks within 10 minutes and still needs to be measured in practice.
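A minimal sketch of this threshold check, assuming a hypothetical helper called from the block-processing path (the class and method names here are illustrative, not from the java-tron codebase):

```java
import java.util.concurrent.BlockingQueue;

// Hypothetical back-pressure helper wrapping the existing event queue.
public class BlockProcessBackPressure {
  // Threshold and sleep interval taken from the proposal above.
  private static final int MAX_QUEUE_SIZE = 1000;

  private final BlockingQueue<?> triggerCapsuleQueue;

  public BlockProcessBackPressure(BlockingQueue<?> queue) {
    this.triggerCapsuleQueue = queue;
  }

  // Call before processing each block: pauses the caller while the event
  // queue is backlogged, and returns once the consumer drains it.
  public void waitIfBacklogged() throws InterruptedException {
    while (triggerCapsuleQueue.size() > MAX_QUEUE_SIZE) {
      Thread.sleep(2000);
    }
  }
}
```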
Rationale
Java-tron can load event plugins through its configuration file; currently there are a Mongo plugin and a Kafka plugin. The plugin implementations are at https://github.com/tronprotocol/event-plugin. The node consumes events and, through the plugin, serializes them into Mongo or streams them to Kafka.
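For context, plugin loading is driven by the event.subscribe block of the node's configuration; a trimmed, illustrative example (the values and the exact key set may differ across java-tron versions):

```
event.subscribe = {
  path = "/deploy/plugins/plugin-kafka-1.0.0.zip" // absolute path of the plugin
  server = "127.0.0.1:9092"                       // target server receiving the events
  dbconfig = ""                                   // dbname|username|password (Mongo plugin)
  topics = [
    {
      triggerName = "block"
      enable = true
      topic = "block"
    }
  ]
}
```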
All events are buffered in a BlockingQueue:
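In the java-tron source this is, paraphrased (the exact declaration may vary by version), an unbounded LinkedBlockingQueue held by Manager:

```java
// In org.tron.core.db.Manager (sketch): the queue has no capacity bound,
// so a slow consumer lets it grow until the heap is exhausted.
private BlockingQueue<TriggerCapsule> triggerCapsuleQueue = new LinkedBlockingQueue<>();
```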
There are multiple producers, such as org.tron.core.db.Manager#postTransactionTrigger, which writes a transaction's logs to the queue:
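A simplified sketch of that producer path (paraphrasing the actual method; details may differ by version):

```java
// Sketch of org.tron.core.db.Manager#postTransactionTrigger: offer() on an
// unbounded queue effectively always succeeds, so producers never block.
private void postTransactionTrigger(final TransactionCapsule trxCap,
                                    final BlockCapsule blockCap) {
  TransactionLogTriggerCapsule trx = new TransactionLogTriggerCapsule(trxCap, blockCap);
  if (!triggerCapsuleQueue.offer(trx)) {
    logger.info("too many triggers, transaction trigger lost: {}", trxCap.getTransactionId());
  }
}
```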
But there is only one consumer, org.tron.core.db.Manager#triggerCapsuleProcessLoop:
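The consumer is a single thread that polls the queue and hands each capsule to processTrigger, again as a paraphrased sketch:

```java
// Sketch of org.tron.core.db.Manager#triggerCapsuleProcessLoop: one thread
// drains the queue; if processTrigger() is slow, the queue backlogs.
private Runnable triggerCapsuleProcessLoop =
    () -> {
      while (isRunTriggerCapsuleProcessThread) {
        try {
          TriggerCapsule triggerCapsule = triggerCapsuleQueue.poll(1, TimeUnit.SECONDS);
          if (triggerCapsule != null) {
            triggerCapsule.processTrigger();
          }
        } catch (InterruptedException ex) {
          Thread.currentThread().interrupt();
        } catch (Throwable t) {
          logger.error("unknown throwable happened in process capsule loop", t);
        }
      }
    };
```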
processTrigger ultimately serializes events through the seven APIs of the plugin interface IPluginEventListener. If consumers drain the queue much more slowly than producers fill it, the queue backlogs. After a while the node experiences frequent full GCs, cannot synchronize blocks or serve external requests, and may eventually exhaust memory and hit an OOM, leading to data loss.
Possible reasons for slow queue consumption include, for example, slow event serialization inside the plugin and slow writes to the downstream Mongo or Kafka service.
Implementation
One possible way is to set maximum and minimum thresholds on the queue's length and start a monitoring thread. When the queue's length exceeds the maximum, the thread suspends block synchronization or broadcasting and promptly warns the user to deal with the queue overflow; when the length falls below the minimum, it resumes synchronization.
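A minimal sketch of such a watermark monitor, assuming hypothetical pauseBlockProcessing and resumeBlockProcessing hooks into Manager (all names are illustrative, not the actual java-tron API):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative high/low watermark monitor: pauses block sync when the
// event queue backlogs and resumes once the consumer catches up.
public class QueueWatermarkMonitor implements Runnable {
  private static final int MAX_QUEUE_SIZE = 1000; // high watermark, needs measurement
  private static final int MIN_QUEUE_SIZE = 100;  // low watermark, illustrative

  private final BlockingQueue<?> queue;
  private final Runnable pauseBlockProcessing;    // hypothetical hook into Manager
  private final Runnable resumeBlockProcessing;   // hypothetical hook into Manager
  private volatile boolean paused = false;

  public QueueWatermarkMonitor(BlockingQueue<?> queue,
                               Runnable pause, Runnable resume) {
    this.queue = queue;
    this.pauseBlockProcessing = pause;
    this.resumeBlockProcessing = resume;
  }

  @Override
  public void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        int size = queue.size();
        if (!paused && size > MAX_QUEUE_SIZE) {
          pauseBlockProcessing.run();  // also the place to warn the operator
          paused = true;
        } else if (paused && size < MIN_QUEUE_SIZE) {
          resumeBlockProcessing.run();
          paused = false;
        }
        TimeUnit.MILLISECONDS.sleep(500); // polling interval, illustrative
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
```

Using two thresholds (hysteresis) prevents rapid pause/resume flapping when the queue length hovers around a single cutoff.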