# Benchmarks

Totally unscientific and mostly unrealistic benchmark that the ClickHouse/ch-go project uses to understand performance.

The main goal is to measure the minimal client overhead (CPU, RAM) needed to read data, i.e. data block deserialization and transfer.

Please see the Notes section below for more details about the results.

```
SELECT number FROM system.numbers_mt LIMIT 500000000

500000000 rows in set. Elapsed: 0.503 sec.
Processed 500.07 million rows,
  4.00 GB (993.26 million rows/s., 7.95 GB/s.)
```

Note: due to the row-oriented design of most libraries, the overhead per single row is significantly higher, so the results can be somewhat surprising.

| Name | Time | RAM | Ratio |
|------|------|-----|-------|
| ClickHouse/ch-go (Go) | 401ms | 9M | ~1x |
| clickhouse-client (C++) | 387ms | 91M | ~1x |
| vahid-sohrabloo/chconn (Go) | 472ms | 9M | ~1x |
| clickhouse-cpp (C++) | 516ms | 6.9M | 1.47x |
| clickhouse_driver (Rust) | 614ms | 9M | 1.72x |
| curl (C, HTTP) | 3.7s | 10M | 9x |
| clickhouse-client (Java, HTTP) | 6.4s | 121M | 16x |
| clickhouse-jdbc (Java, HTTP) | 7.2s | 120M | 18x |
| loyd/clickhouse.rs (Rust, HTTP) | 10s | 7.2M | 28x |
| uptrace (Go)¹ | 22s | 13M | 55x |
| clickhouse-driver (Python) | 37s | 60M | 106x |
| ClickHouse/clickhouse-go (Go)¹ | 46.8s | 23M | 117x |
| mailru/go-clickhouse (Go, HTTP) | 4m13s | 13M | 729x |

See RESULTS.md and RESULTS.slow.md.

`ClickHouse/ch-go`, `clickhouse-client`, and `vahid-sohrabloo/chconn` are all kept at `~1x` because they are mostly equal.

## Notes

### C++

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---------|-----------|----------|----------|----------|
| ClickHouse/ch-go | 598.8 ± 92.2 | 356.9 | 792.8 | 1.07 ± 0.33 |
| clickhouse-client | 561.9 ± 149.5 | 387.8 | 1114.2 | 1.00 |
| clickhouse-cpp | 574.4 ± 35.9 | 523.3 | 707.4 | 1.02 ± 0.28 |

We are selecting the best results; note, however, that the C++ client has lower dispersion.

### Maximum possible speed

I've measured localhost performance using iperf3 and got 10 GiB/s, which correlates with the top results.

For example, one of the ClickHouse/ch-go results is 390ms for 500000000 rows: 4.0 GB at 10 GB/s.

I've also implemented a mock server in Go that simulates a ClickHouse server to reduce overhead, because the main bottleneck in this test is currently the server itself (and probably localhost). Against it, ClickHouse/ch-go was able to achieve 257ms for 500000000 rows (4.0 GB, 16 GB/s), which should be the maximum possible burst result, but I'm not 100% sure.

In ClickHouse/ch-go micro-benchmarks I'm getting up to 27 GB/s, not accounting for any network overhead (i.e. in-memory).

## Footnotes

  1. Uses reflection on `row.Scan(&value)`, which causes additional overhead.