API Sidecar request is very slow #1365

Since this morning, API requests have been very slow; previously they were very fast. What is causing this?

polkadot version: v1.5.0
sidecar: v17.3.2
The server is configured with a 12-core CPU, 64 GB memory, and a 3 TB SSD.
Comments
Over 1 million transactions today, caused by DotOrdinals inscriptions.
Which Polkadot API are you talking about?
This is most likely a bug in Sidecar: it is too slow to handle all the extrinsics.
Thanks for reporting. Indeed, this seems to be an issue within API Sidecar; we are looking into it.
We are facing the same issue. Right after startup the response time is already huge, and after sequential requests it becomes much, much worse. We tried increasing our CPU/memory by a lot, and also...
This was addressed by our latest release.
What were the performance test outcomes? We are on the new release and using the noFees param, and we still see 2-second response times (a major improvement over the 25-30 second responses), but that is substantially slower than the sub-1-second responses of the past.
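(For reference, a minimal sketch of how such a timing could be taken against a local Sidecar instance. The base URL and block height are placeholders, not values from this thread; `noFees=true` is the query parameter on Sidecar's `/blocks/{blockId}` endpoint that skips fee calculation.)

```ts
// Minimal timing sketch against a local Sidecar instance (Node 18+ for fetch).
// The base URL and block height below are illustrative placeholders.
const SIDECAR = "http://127.0.0.1:8080";

async function timeBlockFetch(height: number): Promise<void> {
  const start = Date.now();
  // noFees=true skips per-extrinsic fee calculation, the expensive step
  // on blocks carrying hundreds of extrinsics.
  const res = await fetch(`${SIDECAR}/blocks/${height}?noFees=true`);
  const block = await res.json();
  console.log(
    `block ${height}: ${block.extrinsics.length} extrinsics in ${Date.now() - start} ms`
  );
}

timeBlockFetch(18_500_000).catch(console.error);
```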
What version were you using before you updated to 17.3.3? From 17.3.2 -> 17.3.3, performance was the only change we made. If I had to guess, the reason you are seeing an increase in response time is that the average block size in terms of extrinsics has gone up dramatically: just a day and a half ago the average extrinsic count was probably in the low tens to single digits, whereas now it is consistently in the hundreds. But if you test Sidecar against older blocks, you will see an increase in performance.
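(One way to check that hypothesis is to point the timing helper from the sketch above at an older, sparser block and a recent, dense one. The heights below are illustrative placeholders, not exact inscription-era boundaries.)

```ts
// Reuses timeBlockFetch from the sketch above; heights are placeholders.
async function compareEras(): Promise<void> {
  for (const height of [17_000_000, 18_760_000]) {
    await timeBlockFetch(height);
  }
}
compareEras().catch(console.error);
```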
We went from 17.3.2 -> 17.3.3 at the start of the day for exactly this reason, so we were purely in it for the performance gain.
Yeah, we noticed ordinal/inscription load on other networks too. However, the indirection through API Sidecar adds an extra layer of complication, since we are entirely reliant on it to translate to and from the Polkadot node.

We run API Sidecar within the same pod, next to the Polkadot node, in AWS EKS. From being a tiny sidecar, it is now allocated 8 GB requested / 16 GB limit of memory, while the node has significantly less: 4 GB / 8 GB. This was the only way I could think of to increase concurrent performance, given that Sidecar continues to respond quite slowly. That is OK when we are at chain tip, but incredibly bad if we fall behind, as there is only so much we can squeeze out of each pod. Node performance does not seem to have been impacted at all, even though Sidecar puts so much demand on it, which makes me think there is even more performance to be had here. We have both noFees & finalizedKey set.

If we can get more performance, we can be healthier here. I was originally looking at #1361 before the report here accelerated some of that. We are happy to provide any other insights that might help the team; I do realize this might be a challenge with the holidays.
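(A self-contained sketch of the request shape this setup implies. The parameter values are assumptions about what "noFees & finalizedKey set" means as a performance tuning: `noFees=true` skips fee calculation, and `finalizedKey=false` is, per our reading of Sidecar's `/blocks` docs, a way to omit the finalized-head check.)

```ts
// Self-contained sketch (Node 18+). Parameter values are assumptions about
// what "noFees & finalizedKey set" means for performance tuning:
//   noFees=true        -> skip per-extrinsic fee calculation
//   finalizedKey=false -> omit the block's `finalized` field (assumed to
//                         save an extra lookup against the finalized head)
const SIDECAR_URL = "http://127.0.0.1:8080"; // Sidecar runs in the same pod

async function fetchBlock(height: number): Promise<unknown> {
  const params = new URLSearchParams({ noFees: "true", finalizedKey: "false" });
  const res = await fetch(`${SIDECAR_URL}/blocks/${height}?${params}`);
  if (!res.ok) throw new Error(`Sidecar returned HTTP ${res.status}`);
  return res.json();
}
```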