
fix: LRUcache for block requests #630

Merged
merged 11 commits into from
Aug 21, 2021

Conversation

TarikGul
Member

@TarikGul TarikGul commented Aug 3, 2021

closes: #612

An LRU cache is implemented for /blocks/head and /blocks/{blockId}.

The main benefit of this LRU cache is ensuring that when users poll /blocks/head for the most recent block, it doesn't make repeat requests for the same blockHash.
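The polling benefit can be sketched as follows. This is a hypothetical illustration, not Sidecar's actual code: the `Block` type, `getBlockByHash`, and the `fetchBlock` parameter are made-up names, and a plain `Map` stands in for the real LRU cache.

```typescript
// Hypothetical sketch: cache-first lookup keyed by blockHash, so that
// repeated polls of /blocks/head for the same head hash skip the RPC fetch.
type Block = { hash: string; number: number };

// Stand-in for the LRU cache (the real implementation bounds its size).
const cache = new Map<string, Block>();

async function getBlockByHash(
	hash: string,
	fetchBlock: (hash: string) => Promise<Block>
): Promise<Block> {
	const cached = cache.get(hash);
	if (cached) {
		// Cache hit: the same head hash was already fetched, return it directly.
		return cached;
	}
	const block = await fetchBlock(hash);
	cache.set(hash, block);
	return block;
}
```

Only the first poll for a given hash triggers an upstream request; subsequent polls for the same hash are served from memory.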

Contributor

@insipx insipx left a comment


LGTM!

import LRU from 'lru-cache';

import { IBlock } from '../../types/responses';

export const initLRUCache = (): LRU<string, IBlock> => {
	const config = { max: 2 };
	return new LRU(config);
};
Member

@niklasad1 niklasad1 Aug 6, 2021


what does max == 2 mean here? The two most recent blocks can be cached in memory?

Member Author


Yup, exactly. The max length of the cache won't exceed 2. This is actually kinda up in the air, but from a brief discussion with Zeke we figured 1-2 would be sufficient for the needs of polling blocks/head.
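The eviction behavior behind `max: 2` can be demonstrated with a minimal sketch. This is an illustrative toy class, not the actual `lru-cache` package: a `Map`'s insertion order serves as the recency order, and the `MiniLRU` name and API are made up for this example.

```typescript
// Illustrative sketch of LRU semantics with a capacity of 2:
// once a third entry is inserted, the least recently used entry is evicted.
class MiniLRU<K, V> {
	private map = new Map<K, V>();

	constructor(private max: number) {}

	get(key: K): V | undefined {
		const value = this.map.get(key);
		if (value !== undefined) {
			// Refresh recency: re-insert so the key moves to the back of the Map.
			this.map.delete(key);
			this.map.set(key, value);
		}
		return value;
	}

	set(key: K, value: V): void {
		if (this.map.has(key)) {
			this.map.delete(key);
		} else if (this.map.size >= this.max) {
			// Evict the least recently used entry (front of the Map).
			const oldest = this.map.keys().next().value as K;
			this.map.delete(oldest);
		}
		this.map.set(key, value);
	}

	has(key: K): boolean {
		return this.map.has(key);
	}
}

const cache = new MiniLRU<string, string>(2);
cache.set('0xaaa', 'block A');
cache.set('0xbbb', 'block B');
cache.set('0xccc', 'block C'); // capacity is 2, so '0xaaa' is evicted
console.log(cache.has('0xaaa')); // false
console.log(cache.has('0xccc')); // true
```

With `max: 2`, polling /blocks/head keeps only the two most recently requested blocks in memory, which is exactly the working set a head poller touches.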

@TarikGul
Member Author

Before merging this, I am going to confirm by running stress tests and benchmarks that this doesn't cause any residual issues for Sidecar.

@TarikGul
Member Author

After running benchmarks against v9.1.4 and this PR, there were no regressions:

v9.1.4

Running 2m test @ http://127.0.0.1:8080
  4 threads and 12 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.37s     2.49s   10.96s    67.65%
    Req/Sec     1.97      4.97    30.00     89.19%
  Latency Distribution
     50%    5.19s 
     75%    7.43s 
     90%    8.42s 
     99%   10.96s 
  267 requests in 2.00m, 292.36MB read
Requests/sec:      2.22
Transfer/sec:      2.43MB

Total completed requests: 267
Failed requests: 0
Timeouts: 0
Avg RequestTime(Latency):          5371.86ms
Max RequestTime(Latency):          10958.036ms
Min RequestTime(Latency):          188.677ms

This branch

Running 2m test @ http://127.0.0.1:8080
  4 threads and 12 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.21s     2.83s   13.46s    76.29%
    Req/Sec     2.62      5.79    30.00     85.00%
  Latency Distribution
     50%    4.64s 
     75%    7.65s 
     90%    8.13s 
     99%   13.46s 
  279 requests in 2.00m, 308.50MB read
Requests/sec:      2.32
Transfer/sec:      2.57MB

Total completed requests: 279
Failed requests: 0
Timeouts: 0
Avg RequestTime(Latency):          5206.54ms
Max RequestTime(Latency):          13458.607ms
Min RequestTime(Latency):          12.891ms

No regressions that I can tell; I ran two more sets of benchmarks and they were all similar.

@dvdplm any last thoughts before I merge this?

@TarikGul TarikGul merged commit 9f7a29f into master Aug 21, 2021
@TarikGul TarikGul deleted the tarik-lru-cache branch August 21, 2021 00:21
Successfully merging this pull request may close these issues.

GET /blocks/head is super slow, add alternative GET /blocks/head/summary