Thanos, Prometheus and Golang version used:
Thanos v0.30.0
Object Storage Provider:
s3
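For context, Store (and Receive) read the bucket via an objstore config file; a minimal sketch in the documented Thanos S3 format, with placeholder bucket, endpoint, and credentials (not the reporter's actual values):
cat > bucket.yml <<'EOF'
type: S3
config:
  bucket: "thanos-metrics"     # placeholder bucket name
  endpoint: "s3.example.com"   # placeholder S3 endpoint
  access_key: "ACCESS_KEY"
  secret_key: "SECRET_KEY"
EOF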
What happened:
When I query metrics for the last 7 days, the Store does not seem to respond to the query.
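For reference, the failing query can be replayed against Thanos Query's Prometheus-compatible HTTP API; the matchers, time range, and step below are taken from the Query debug logs further down, while the address and port are placeholders:
# replay one of the failing query windows (address and port are placeholders)
curl -sG 'http://X.X.X.X:10902/api/v1/query_range' \
  --data-urlencode 'query=node_memory_HardwareCorrupted_bytes{instance="toto:9100",job="node_exporter",platform_name="titi"}' \
  --data-urlencode 'start=1691193300' \
  --data-urlencode 'end=1691278800' \
  --data-urlencode 'step=1200'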
What you expected to happen:
After a restart of the Thanos Store, it serves the data again.
How to reproduce it (as minimally and precisely as possible):
Difficult to explain how to reproduce 😕
I just launch the components, and after several days it stops working.
Full logs to relevant components:
Thanos Store logs stopped on 08/01/2023...
Thanos Query has the targets:
Thanos Store is responding on GRPC port:
nc -v X.X.X.X 20786
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to X.X.X.X:20786.
@
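nc only shows that the TCP port accepts connections. Assuming grpc_health_probe is available on the host, the standard gRPC health service that Thanos components expose gives a stronger check:
# check the gRPC health service behind the same port probed with nc above
grpc_health_probe -addr=X.X.X.X:20786
status: SERVING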
Thanos Store is healthy and ready:
curl http://X.X.X.X:31425/-/healthy
OK
curl http://X.X.X.X:31425/-/ready
OK
Thanos Query logs:
level=debug ts=2023-08-09T08:53:11.586237681Z caller=proxy.go:282 component=proxy request="min_time:1691193300000 max_time:1691278800000 matchers:<name:\"instance\" value:\"toto:9100\" > matchers:<name:\"job\" value:\"node_exporter\" > matchers:<name:\"platform_name\" value:\"titi\" > matchers:<name:\"__name__\" value:\"node_memory_HardwareCorrupted_bytes\" > aggregates:COUNT aggregates:SUM step:1200000 " err="No StoreAPIs matched for this query" stores="store Addr: X.X.X.X:30374 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1691539200005 Maxt: 9223372036854775807 filtered out: does not have data within this time period: [1691193300000,1691278800000]. Store time ranges: [1691539200005,9223372036854775807];store Addr: X.X.X.X:26727 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1691539200005 Maxt: 9223372036854775807 filtered out: does not have data within this time period: [1691193300000,1691278800000]. Store time ranges: [1691539200005,9223372036854775807];store Addr: X.X.X.X:20786 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1685522400451 Maxt: 1690804800000 filtered out: does not have data within this time period: [1691193300000,1691278800000]. Store time ranges: [1685522400451,1690804800000];store Addr: X.X.X.X:30945 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1691539200005 Maxt: 9223372036854775807 filtered out: does not have data within this time period: [1691193300000,1691278800000]. Store time ranges: [1691539200005,9223372036854775807]"
level=debug ts=2023-08-09T08:53:11.586278282Z caller=proxy.go:282 component=proxy request="min_time:1691452500000 max_time:1691538000000 matchers:<name:\"instance\" value:\"toto:9100\" > matchers:<name:\"job\" value:\"node_exporter\" > matchers:<name:\"platform_name\" value:\"titi\" > matchers:<name:\"__name__\" value:\"node_memory_HardwareCorrupted_bytes\" > aggregates:COUNT aggregates:SUM step:1200000 " err="No StoreAPIs matched for this query" stores="store Addr: X.X.X.X:30374 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1691539200005 Maxt: 9223372036854775807 filtered out: does not have data within this time period: [1691452500000,1691538000000]. Store time ranges: [1691539200005,9223372036854775807];store Addr: X.X.X.X:26727 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1691539200005 Maxt: 9223372036854775807 filtered out: does not have data within this time period: [1691452500000,1691538000000]. Store time ranges: [1691539200005,9223372036854775807];store Addr: X.X.X.X:20786 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1685522400451 Maxt: 1690804800000 filtered out: does not have data within this time period: [1691452500000,1691538000000]. Store time ranges: [1685522400451,1690804800000];store Addr: X.X.X.X:30945 LabelSets: {receive_cluster=\"titi\", replica=\"thanos-receive\", tenant_id=\"default-tenant\"} Mint: 1691539200005 Maxt: 9223372036854775807 filtered out: does not have data within this time period: [1691452500000,1691538000000]. Store time ranges: [1691539200005,9223372036854775807]"
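Decoding the timestamps in those log lines shows why no StoreAPI matched: the Store at X.X.X.X:20786 advertises data only up to Maxt 1690804800000, while the Receive replicas advertise data only from Mint 1691539200005 onward, so the queried windows fall into the gap between them. Converting to epoch seconds with GNU date:
date -u -d @1690804800   # Store Maxt, consistent with the Store logs stopping around 08/01/2023
Mon Jul 31 12:00:00 UTC 2023
date -u -d @1691539200   # Receive Mint
Wed Aug  9 00:00:00 UTC 2023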
Anything else we need to know: I use Nomad (1.6.1) and Consul (1.16.1) to launch the Thanos components on different instances, and the Nomad jobs use bridge networking mode for the ports.
Thanos Receive has a TSDB retention of 4h.
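For reference, that retention is set via the Receive flag; a minimal sketch of the relevant invocation, with placeholder paths and addresses rather than the reporter's actual config:
# minimal sketch; paths and addresses are placeholders
thanos receive \
  --tsdb.retention=4h \
  --tsdb.path=/var/thanos/receive \
  --objstore.config-file=bucket.yml \
  --grpc-address=0.0.0.0:10907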
Thanos Store configuration:
Thanos Query configuration:
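The configurations themselves are not included above. Purely as an illustrative sketch, not the reporter's actual flags, a Store and Query pairing consistent with the ports probed earlier might look like:
# illustrative sketch only; paths and the Query port are hypothetical,
# while 20786/31425 are the gRPC/HTTP ports probed above
thanos store \
  --data-dir=/var/thanos/store \
  --objstore.config-file=bucket.yml \
  --grpc-address=0.0.0.0:20786 \
  --http-address=0.0.0.0:31425

thanos query \
  --http-address=0.0.0.0:10902 \
  --endpoint=X.X.X.X:20786 \
  --endpoint=X.X.X.X:30374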
Environment:
uname -a
3.10.0-1160.49.1.el7.x86_64 #1 SMP Tue Nov 30 15:51:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux