Merge pull request #623 from grafana/gotjosh/update-mimir-prometheus-…c10186e

Merge upstream prometheus/prometheus at `c10186e`
gotjosh committed May 3, 2024
2 parents ee1b0be + 530e146 commit 352c7ff
Showing 13 changed files with 130 additions and 114 deletions.
30 changes: 29 additions & 1 deletion CHANGELOG.md
@@ -2,13 +2,41 @@

## unreleased

* [CHANGE] TSDB: Fix the predicate checking for blocks which are beyond the retention period to include the ones right at the retention boundary. #9633
* [CHANGE] Rules: Execute 1 query instead of N (where N is the number of alerts within alert rule) when restoring alerts. #13980
* [ENHANCEMENT] Rules: Add `rule_group_last_restore_duration_seconds` to measure the time it takes to restore a rule group. #13974
* [ENHANCEMENT] OTLP: Improve remote write format translation performance by using label set hashes for metric identifiers instead of string based ones. #14006 #13991
* [BUGFIX] OTLP: Don't generate target_info unless at least one identifying label is defined. #13991
* [BUGFIX] OTLP: Don't generate target_info unless there are metrics. #13991

## 2.52.0-rc.0 / 2024-04-22

* [CHANGE] TSDB: Fix the predicate checking for blocks which are beyond the retention period to include the ones right at the retention boundary. #9633
* [FEATURE] Kubernetes SD: Add a new metric `prometheus_sd_kubernetes_failures_total` to track failed requests to Kubernetes API. #13554
* [FEATURE] Kubernetes SD: Add node and zone metadata labels when using the endpointslice role. #13935
* [FEATURE] Azure SD/Remote Write: Allow usage of Azure authorization SDK. #13099
* [FEATURE] Alerting: Support native histogram templating. #13731
* [FEATURE] Linode SD: Support IPv6 range discovery and region filtering. #13774
* [ENHANCEMENT] PromQL: Performance improvements for queries with regex matchers. #13461
* [ENHANCEMENT] PromQL: Performance improvements when using aggregation operators. #13744
* [ENHANCEMENT] PromQL: Validate label_join destination label. #13803
* [ENHANCEMENT] Scrape: Increment `prometheus_target_scrapes_sample_duplicate_timestamp_total` metric on duplicated series during one scrape. #12933
* [ENHANCEMENT] TSDB: Many improvements in performance. #13742 #13673 #13782
* [ENHANCEMENT] TSDB: Pause regular block compactions if the head needs to be compacted (prioritize head as it increases memory consumption). #13754
* [ENHANCEMENT] Observability: Improved logging during signal handling termination. #13772
* [ENHANCEMENT] Observability: All log lines for dropped series use the "num_dropped" key consistently. #13823
* [ENHANCEMENT] Observability: Log chunk snapshot and mmaped chunk replay duration during WAL replay. #13838
* [ENHANCEMENT] Observability: Log if the block is being created from WBL during compaction. #13846
* [BUGFIX] PromQL: Fix inaccurate sample number statistic when querying histograms. #13667
* [BUGFIX] PromQL: Fix `histogram_stddev` and `histogram_stdvar` for cases where the histogram has negative buckets. #13852
* [BUGFIX] PromQL: Fix possible duplicated label name and values in a metric result for specific queries. #13845
* [BUGFIX] Scrape: Fix setting native histogram schema factor during scrape. #13846
* [BUGFIX] TSDB: Fix counting of histogram samples when creating WAL checkpoint stats. #13776
* [BUGFIX] TSDB: Fix cases of compacting empty heads. #13755
* [BUGFIX] TSDB: Count float histograms in WAL checkpoint. #13844
* [BUGFIX] Remote Read: Fix memory leak due to broken requests. #13777
* [BUGFIX] API: Stop building response for `/api/v1/series/` when the API request was cancelled. #13766
* [BUGFIX] promtool: Fix panic on `promtool tsdb analyze --extended` when no native histograms are present. #13976

## 2.51.2 / 2024-04-09

Bugfix release.
2 changes: 1 addition & 1 deletion Makefile.common
@@ -55,7 +55,7 @@ ifneq ($(shell command -v gotestsum 2> /dev/null),)
endif
endif

PROMU_VERSION ?= 0.15.0
PROMU_VERSION ?= 0.17.0
PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz

SKIP_GOLANGCI_LINT :=
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
2.51.2
2.52.0-rc.0
60 changes: 1 addition & 59 deletions docs/querying/remote_read_api.md
@@ -5,63 +5,7 @@ sort_rank: 7

# Remote Read API

This is not currently considered part of the stable API and is subject to change
even between non-major version releases of Prometheus.

## Format overview

The API response format is JSON. Every successful API request returns a `2xx`
status code.

Invalid requests that reach the API handlers return a JSON error object
and one of the following HTTP response codes:

- `400 Bad Request` when parameters are missing or incorrect.
- `422 Unprocessable Entity` when an expression can't be executed
([RFC4918](https://tools.ietf.org/html/rfc4918#page-78)).
- `503 Service Unavailable` when queries time out or abort.

Other non-`2xx` codes may be returned for errors occurring before the API
endpoint is reached.

An array of warnings may be returned if there are errors that do
not inhibit the request execution. All of the data that was successfully
collected will be returned in the data field.

The JSON response envelope format is as follows:

```
{
"status": "success" | "error",
"data": <data>,
// Only set if status is "error". The data field may still hold
// additional data.
"errorType": "<string>",
"error": "<string>",
// Only if there were warnings while executing the request.
// There will still be data in the data field.
"warnings": ["<string>"]
}
```
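
For illustration only (the field values here are invented; compare the expected body in `TestRespondError` further down in this commit), an error response in this envelope could look like:

```
{
  "status": "error",
  "errorType": "timeout",
  "error": "query timed out in expression evaluation",
  "data": null
}
```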

Generic placeholders are defined as follows:

* `<rfc3339 | unix_timestamp>`: Input timestamps may be provided either in
[RFC3339](https://www.ietf.org/rfc/rfc3339.txt) format or as a Unix timestamp
in seconds, with optional decimal places for sub-second precision. Output
timestamps are always represented as Unix timestamps in seconds.
* `<series_selector>`: Prometheus [time series
selectors](basics.md#time-series-selectors) like `http_requests_total` or
`http_requests_total{method=~"(GET|POST)"}` and need to be URL-encoded.
* `<duration>`: [Prometheus duration strings](basics.md#time_durations).
For example, `5m` refers to a duration of 5 minutes.
* `<bool>`: boolean values (strings `true` and `false`).

Note: Names of query parameters that may be repeated end with `[]`.

## Remote Read API
> This is not currently considered part of the stable API and is subject to change even between non-major version releases of Prometheus.
This API provides data read functionality from Prometheus. This interface expects [snappy](https://github.com/google/snappy) compression.
The API definition is located [here](https://github.com/prometheus/prometheus/blob/master/prompb/remote.proto).
@@ -79,5 +23,3 @@ This returns a message that includes a list of raw samples.

These streamed chunks utilize an XOR algorithm inspired by the [Gorilla](http://www.vldb.org/pvldb/vol8/p1816-teller.pdf)
compression to encode the chunks. However, it provides resolution to the millisecond instead of to the second.
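
As a rough sketch of the request/response exchange described here (not part of this change), the following client builds a `prompb.ReadRequest`, snappy-compresses the marshalled protobuf, and POSTs it to a Prometheus server. The endpoint URL, time range, and matcher values are placeholders, and the `github.com/gogo/protobuf/proto` and `github.com/golang/snappy` packages are assumed.

```go
// Minimal remote-read client sketch (illustrative only; endpoint, time range,
// and matcher values are placeholders). It marshals a prompb.ReadRequest,
// snappy-compresses it, POSTs it to /api/v1/read, and decodes the
// snappy-compressed prompb.ReadResponse returned for the samples response type.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	// Ask for raw samples of the series `up` over a one-hour window.
	readReq := &prompb.ReadRequest{
		Queries: []*prompb.Query{{
			StartTimestampMs: 1714000000000,
			EndTimestampMs:   1714003600000,
			Matchers: []*prompb.LabelMatcher{{
				Type:  prompb.LabelMatcher_EQ,
				Name:  "__name__",
				Value: "up",
			}},
		}},
	}

	// The request body is a snappy-compressed protobuf message.
	raw, err := proto.Marshal(readReq)
	if err != nil {
		panic(err)
	}
	compressed := snappy.Encode(nil, raw)

	httpReq, err := http.NewRequest(http.MethodPost, "http://localhost:9090/api/v1/read", bytes.NewReader(compressed))
	if err != nil {
		panic(err)
	}
	httpReq.Header.Set("Content-Type", "application/x-protobuf")
	httpReq.Header.Set("Content-Encoding", "snappy")
	httpReq.Header.Set("X-Prometheus-Remote-Read-Version", "0.1.0")

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// For the samples response type the body is, again, snappy-compressed protobuf.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	decoded, err := snappy.Decode(nil, body)
	if err != nil {
		panic(err)
	}
	var readResp prompb.ReadResponse
	if err := proto.Unmarshal(decoded, &readResp); err != nil {
		panic(err)
	}
	fmt.Printf("received %d query results\n", len(readResp.Results))
}
```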


6 changes: 5 additions & 1 deletion rules/group.go
@@ -703,6 +703,9 @@ func (g *Group) RestoreForState(ts time.Time) {
"stage", "Select",
"err", err,
)
// Even if we failed to query the `ALERT_FOR_STATE` series, we currently have no way to retry the restore process.
// So the best we can do is mark the rule as restored and let it eventually fire.
alertRule.SetRestored(true)
continue
}

@@ -714,7 +717,8 @@ func (g *Group) RestoreForState(ts time.Time) {

// No results for this alert rule.
if len(seriesByLabels) == 0 {
level.Debug(g.logger).Log("msg", "Failed to find a series to restore the 'for' state", labels.AlertName, alertRule.Name())
level.Debug(g.logger).Log("msg", "No series found to restore the 'for' state of the alert rule", labels.AlertName, alertRule.Name())
alertRule.SetRestored(true)
continue
}

3 changes: 3 additions & 0 deletions rules/manager_test.go
@@ -486,6 +486,9 @@ func TestForStateRestore(t *testing.T) {
return labels.Compare(got[i].Labels, got[j].Labels) < 0
})

// In all cases, we expect the restoration process to have completed.
require.Truef(t, newRule.Restored(), "expected the rule restoration process to have completed")

// Checking if we have restored it correctly.
switch {
case tt.noRestore:
40 changes: 26 additions & 14 deletions web/api/v1/api_test.go
@@ -15,7 +15,6 @@ package v1

import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
@@ -35,6 +34,7 @@ import (
"github.com/prometheus/prometheus/util/testutil"

"github.com/go-kit/log"
jsoniter "github.com/json-iterator/go"
"github.com/prometheus/client_golang/prometheus"
config_util "github.com/prometheus/common/config"
"github.com/prometheus/common/model"
@@ -910,6 +910,7 @@ func TestStats(t *testing.T) {
require.IsType(t, &QueryData{}, i)
qd := i.(*QueryData)
require.NotNil(t, qd.Stats)
json := jsoniter.ConfigCompatibleWithStandardLibrary
j, err := json.Marshal(qd.Stats)
require.NoError(t, err)
require.JSONEq(t, `{"custom":"Custom Value"}`, string(j))
@@ -1171,6 +1172,25 @@ func testEndpoints(t *testing.T, api *API, tr *testTargetRetriever, es storage.E
},
},
},
// Test empty vector result
{
endpoint: api.query,
query: url.Values{
"query": []string{"bottomk(2, notExists)"},
},
responseAsJSON: `{"resultType":"vector","result":[]}`,
},
// Test empty matrix result
{
endpoint: api.queryRange,
query: url.Values{
"query": []string{"bottomk(2, notExists)"},
"start": []string{"0"},
"end": []string{"2"},
"step": []string{"1"},
},
responseAsJSON: `{"resultType":"matrix","result":[]}`,
},
// Missing query params in range queries.
{
endpoint: api.queryRange,
@@ -2891,10 +2911,13 @@ func testEndpoints(t *testing.T, api *API, tr *testTargetRetriever, es storage.E
if test.zeroFunc != nil {
test.zeroFunc(res.data)
}
assertAPIResponse(t, res.data, test.response)
if test.response != nil {
assertAPIResponse(t, res.data, test.response)
}
}

if test.responseAsJSON != "" {
json := jsoniter.ConfigCompatibleWithStandardLibrary
s, err := json.Marshal(res.data)
require.NoError(t, err)
require.JSONEq(t, test.responseAsJSON, string(s))
@@ -3292,18 +3315,7 @@ func TestRespondError(t *testing.T) {
require.Equal(t, want, have, "Return code %d expected in error response but got %d", want, have)
h := resp.Header.Get("Content-Type")
require.Equal(t, "application/json", h, "Expected Content-Type %q but got %q", "application/json", h)

var res Response
err = json.Unmarshal(body, &res)
require.NoError(t, err, "Error unmarshaling JSON body")

exp := &Response{
Status: statusError,
Data: "test",
ErrorType: errorTimeout,
Error: "message",
}
require.Equal(t, exp, &res)
require.JSONEq(t, `{"status": "error", "data": "test", "errorType": "timeout", "error": "message"}`, string(body))
}

func TestParseTimeParam(t *testing.T) {
75 changes: 51 additions & 24 deletions web/api/v1/json_codec.go
@@ -25,11 +25,13 @@ import (
)

func init() {
jsoniter.RegisterTypeEncoderFunc("promql.Series", marshalSeriesJSON, marshalSeriesJSONIsEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.Sample", marshalSampleJSON, marshalSampleJSONIsEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.FPoint", marshalFPointJSON, marshalPointJSONIsEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.HPoint", marshalHPointJSON, marshalPointJSONIsEmpty)
jsoniter.RegisterTypeEncoderFunc("exemplar.Exemplar", marshalExemplarJSON, marshalExemplarJSONEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.Vector", unsafeMarshalVectorJSON, neverEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.Matrix", unsafeMarshalMatrixJSON, neverEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.Series", unsafeMarshalSeriesJSON, neverEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.Sample", unsafeMarshalSampleJSON, neverEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.FPoint", unsafeMarshalFPointJSON, neverEmpty)
jsoniter.RegisterTypeEncoderFunc("promql.HPoint", unsafeMarshalHPointJSON, neverEmpty)
jsoniter.RegisterTypeEncoderFunc("exemplar.Exemplar", marshalExemplarJSON, neverEmpty)
jsoniter.RegisterTypeEncoderFunc("labels.Labels", unsafeMarshalLabelsJSON, labelsIsEmpty)
}

@@ -66,8 +68,12 @@ func (j JSONCodec) Encode(resp *Response) ([]byte, error) {
// < more histograms >
// ],
// },
func marshalSeriesJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
func unsafeMarshalSeriesJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
s := *((*promql.Series)(ptr))
marshalSeriesJSON(s, stream)
}

func marshalSeriesJSON(s promql.Series, stream *jsoniter.Stream) {
stream.WriteObjectStart()
stream.WriteObjectField(`metric`)
marshalLabelsJSON(s.Metric, stream)
@@ -78,7 +84,7 @@ func marshalSeriesJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
stream.WriteObjectField(`values`)
stream.WriteArrayStart()
}
marshalFPointJSON(unsafe.Pointer(&p), stream)
marshalFPointJSON(p, stream)
}
if len(s.Floats) > 0 {
stream.WriteArrayEnd()
@@ -89,15 +95,16 @@ func marshalSeriesJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
stream.WriteObjectField(`histograms`)
stream.WriteArrayStart()
}
marshalHPointJSON(unsafe.Pointer(&p), stream)
marshalHPointJSON(p, stream)
}
if len(s.Histograms) > 0 {
stream.WriteArrayEnd()
}
stream.WriteObjectEnd()
}

func marshalSeriesJSONIsEmpty(unsafe.Pointer) bool {
// In the Prometheus API we render an empty object as `[]` or similar.
func neverEmpty(unsafe.Pointer) bool {
return false
}

Expand All @@ -122,8 +129,12 @@ func marshalSeriesJSONIsEmpty(unsafe.Pointer) bool {
// },
// "histogram": [ 1435781451.781, { < histogram, see jsonutil.MarshalHistogram > } ]
// },
func marshalSampleJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
func unsafeMarshalSampleJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
s := *((*promql.Sample)(ptr))
marshalSampleJSON(s, stream)
}

func marshalSampleJSON(s promql.Sample, stream *jsoniter.Stream) {
stream.WriteObjectStart()
stream.WriteObjectField(`metric`)
marshalLabelsJSON(s.Metric, stream)
@@ -145,13 +156,13 @@ func marshalSampleJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
stream.WriteObjectEnd()
}

func marshalSampleJSONIsEmpty(unsafe.Pointer) bool {
return false
}

// marshalFPointJSON writes `[ts, "1.234"]`.
func marshalFPointJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
func unsafeMarshalFPointJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
p := *((*promql.FPoint)(ptr))
marshalFPointJSON(p, stream)
}

func marshalFPointJSON(p promql.FPoint, stream *jsoniter.Stream) {
stream.WriteArrayStart()
jsonutil.MarshalTimestamp(p.T, stream)
stream.WriteMore()
@@ -160,19 +171,19 @@ func marshalFPointJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
}

// marshalHPointJSON writes `[ts, { < histogram, see jsonutil.MarshalHistogram > } ]`.
func marshalHPointJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
func unsafeMarshalHPointJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
p := *((*promql.HPoint)(ptr))
marshalHPointJSON(p, stream)
}

func marshalHPointJSON(p promql.HPoint, stream *jsoniter.Stream) {
stream.WriteArrayStart()
jsonutil.MarshalTimestamp(p.T, stream)
stream.WriteMore()
jsonutil.MarshalHistogram(p.H, stream)
stream.WriteArrayEnd()
}

func marshalPointJSONIsEmpty(unsafe.Pointer) bool {
return false
}

// marshalExemplarJSON writes.
//
// {
@@ -201,10 +212,6 @@ func marshalExemplarJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
stream.WriteObjectEnd()
}

func marshalExemplarJSONEmpty(unsafe.Pointer) bool {
return false
}

func unsafeMarshalLabelsJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
labelsPtr := (*labels.Labels)(ptr)
marshalLabelsJSON(*labelsPtr, stream)
@@ -229,3 +236,23 @@ func labelsIsEmpty(ptr unsafe.Pointer) bool {
labelsPtr := (*labels.Labels)(ptr)
return labelsPtr.IsEmpty()
}

// Marshal a Vector as `[sample,sample,...]` - empty Vector is `[]`.
func unsafeMarshalVectorJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
v := *((*promql.Vector)(ptr))
stream.WriteArrayStart()
for _, s := range v {
marshalSampleJSON(s, stream)
}
stream.WriteArrayEnd()
}

// Marshal a Matrix as `[series,series,...]` - empty Matrix is `[]`.
func unsafeMarshalMatrixJSON(ptr unsafe.Pointer, stream *jsoniter.Stream) {
m := *((*promql.Matrix)(ptr))
stream.WriteArrayStart()
for _, s := range m {
marshalSeriesJSON(s, stream)
}
stream.WriteArrayEnd()
}
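
A hypothetical illustration of what these registrations change (not part of the commit): marshalling an empty `promql.Vector` or `promql.Matrix` through `jsoniter.ConfigCompatibleWithStandardLibrary`, the same configuration the updated `api_test.go` uses, now yields `[]`, which is what the new empty-result test cases assert. The blank import is assumed only to run this package's `init`.

```go
// Hypothetical illustration of the new behaviour: with the encoders above
// registered, empty Vector/Matrix results render as [] rather than null.
package main

import (
	"fmt"

	jsoniter "github.com/json-iterator/go"

	"github.com/prometheus/prometheus/promql"
	_ "github.com/prometheus/prometheus/web/api/v1" // assumed blank import so init() registers the encoders
)

func main() {
	json := jsoniter.ConfigCompatibleWithStandardLibrary

	var emptyVector promql.Vector
	b, err := json.Marshal(emptyVector)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // [] (stdlib encoding/json would print null for a nil slice)

	var emptyMatrix promql.Matrix
	b, err = json.Marshal(emptyMatrix)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // []
}
```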
