
bucket verify: repair out of order labels #964

Merged

Conversation

jjneely
Contributor

@jjneely jjneely commented Mar 22, 2019

Detected and worked around in #953, this PR introduces code into the repair function to fix affected TSDB blocks.

Changes

  • When repairing index issues, make sure the label sets are sorted

@jjneely
Contributor Author

jjneely commented Mar 22, 2019

Running this code gives me the following error, which I think comes from Prometheus or the TSDB. Perhaps I need to make sure the dependencies include the recent upstream patch for this issue.

prometheus/prometheus#5372

More digging next week. Errors:

level=info ts=2019-03-22T20:05:35.843901175Z caller=index_issue.go:88 msg="downloading block for repair" id=01D3ANRRECZMWFP42BWWG34631 issue=index_issue
level=info ts=2019-03-22T20:12:12.523459861Z caller=index_issue.go:92 msg="downloaded block to be repaired" id=01D3ANRRECZMWFP42BWWG34631 issue=index_issue
level=info ts=2019-03-22T20:12:12.616630146Z caller=index_issue.go:94 msg="repairing block" id=01D3ANRRECZMWFP42BWWG34631 issue=index_issue
bucket verify command failed: verify: verify iter, issue index_issue: repair failed for block 01D3ANRRECZMWFP42BWWG34631: rewrite block: add series: out-of-order series added with label set "{DATA_CENTER=\"dal09\",__name__=\"DISCOVERED_ALERTS\",alertname=\"PuppetAgentOpenFdsHigh\",alertstate=\"healthy\",description=\"{{$labels.instance}} of job {{$labels.job}} has been elevated (over 80% of hard limit) for more than 5 minutes.\",instance=\"mesos-10-153-0-162-x2.prod.dal09.example.com:31060\",job=\"http_status-page_prod_site-svc-status-page_dal09\",monitor=\"dal09/sre/prod/prometheus\",runbook=\"https://wiki.example.com/display/siteops/number+of+open+files\",severity=\"page\",summary=\"{{$labels.instance}} puppet-agent open_fds high\"}"

When we have label sets that are not in the correct order, fixing that
changes the order of the series in the index.  So the index must be
rewritten in that new order.  This makes this repair tool take up a
bunch more memory, but produces blocks that verify correctly.
@jjneely jjneely force-pushed the jjneely/fix-repair-for-out-of-order-labels branch from cb51d71 to f09d3f5 on March 25, 2019 20:58
@jjneely
Contributor Author

jjneely commented Mar 25, 2019

Latest patches correct the "add series: out-of-order series" error. When we fix the ordering of label sets, that affects the sort order of the series, so the index must be rewritten in the new series order. This makes repairing a block take more RAM, but produces blocks that verify.

There seem to be other issues; the code to upload to a backup bucket appears broken.

https://github.com/jjneely/thanos/blob/jjneely/fix-repair-for-out-of-order-labels/pkg/block/block.go#L69

is asked to parse strings that look like safe-delete-01D6K5GG9VVGYWGKZMH5ATJXER758700572, which throws errors.

The directory name must be the block ID name exactly to verify.  A temp
directory or random name will not work here.
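To illustrate why a temp directory name breaks verification: TSDB block directories are named by the block's ULID, and anything else fails to parse. A rough standalone check, assuming ULIDs are 26 Crockford base32 characters (the real code uses ulid.Parse; the regexp below is only an approximation):

```go
package main

import (
	"fmt"
	"regexp"
)

// A ULID is exactly 26 characters of Crockford base32 (no I, L, O, U).
// A prefixed name like "safe-delete-<ulid><suffix>" cannot parse as one.
var ulidRe = regexp.MustCompile(`^[0-9A-HJKMNP-TV-Z]{26}$`)

func main() {
	fmt.Println(ulidRe.MatchString("01D3ANRRECZMWFP42BWWG34631"))                      // true
	fmt.Println(ulidRe.MatchString("safe-delete-01D6K5GG9VVGYWGKZMH5ATJXER758700572")) // false
}
```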
@jjneely jjneely changed the title WIP: bucket verify: repair out of order labels bucket verify: repair out of order labels Mar 29, 2019
@jjneely
Contributor Author

jjneely commented Mar 29, 2019

Ready for review. This correctly identifies the out of order labels, repairs the TSDB block, and does the safe-delete operation correctly.

A pointer/reference logic error was eliminating all chunks for a series in
a given TSDB block other than the first chunk.  Chunks are now
referenced correctly via pointers.
@jjneely
Contributor Author

jjneely commented Apr 1, 2019

The repaired TSDB blocks didn't seem to match the originals: they didn't have the correct number of samples and chunks. Digging through the ignore-chunks code, I found some pointer referencing issues that were causing the repair process to compare the same chunk to itself and ignore it as a duplicate.

Member

@povilasv povilasv left a comment

Could you add tests? It's a bit hard to understand what kind of impact this will have.

cmd/thanos/bucket.go (outdated, resolved)
@@ -559,9 +559,9 @@ func sanitizeChunkSequence(chks []chunks.Meta, mint int64, maxt int64, ignoreChk
var last *chunks.Meta

OUTER:
for _, c := range chks {
for i := range chks {
Member

why the change?

Contributor Author

Very subtle. We remember a pointer to the chunk c from the last iteration to compare against the chunk in the current iteration. However, when we use for index, value := range slice, the value is not a pointer into the slice. In fact, it's a new variable that the current item of the slice is copied into. That means our pointer-based comparisons are broken: they always compare the current chunk to itself, because the address of the variable c doesn't change throughout the loop.

Using just a slice index here lets us correctly store a pointer to the slice element from the last iteration and compare it to the chunk in the current iteration. Otherwise, this code was removing all chunks in the series other than the first one.
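The pitfall described above can be demonstrated in isolation. A minimal sketch, with a hypothetical meta struct standing in for chunks.Meta:

```go
package main

import "fmt"

// meta is a hypothetical stand-in for chunks.Meta.
type meta struct{ MinTime, MaxTime int64 }

func main() {
	chks := []meta{{0, 10}, {10, 20}, {20, 30}}

	// The range value variable is a fresh copy of each element, so its
	// address is never the address of the slice element itself. Saving
	// a pointer to it and comparing later compares the wrong thing.
	for i, c := range chks {
		fmt.Println(&c == &chks[i]) // always false: c is a copy
	}

	// Indexing yields stable pointers into the backing array, which is
	// what the repair loop needs to compare the last chunk to the current.
	var last *meta
	for i := range chks {
		curr := &chks[i]
		if last != nil && curr == last {
			// Never triggers: distinct elements have distinct addresses.
			continue
		}
		last = curr
	}
	fmt.Println("kept all", len(chks), "chunks")
}
```

With the buggy form, a saved pointer to the range variable and the "current" pointer could refer to the same variable, so chunks after the first looked like duplicates.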

Member

Let's document this :)

Contributor Author

Where is the right place to do so? Glad to do it.

Contributor Author

Added comments in the code. If that's not the best place, let me know.

Member

amazing, nice catch!

Contributor Author

More like, why did the repair just lose all the data in my blocks?

Member

Very nice catch, the comment makes it clear when we read this in 3 months again :) 👍

@@ -85,10 +85,11 @@ func registerBucketVerify(m map[string]setupFunc, root *kingpin.CmdClause, name
var backupBkt objstore.Bucket
if len(backupconfContentYaml) == 0 {
if *repair {
return errors.Wrap(err, "repair is specified, so backup client is required")
return errors.Errorf("repair is specified, so backup client is required")
Member

nit: some linters throw errors on things like this, I prefer errors.New here

Contributor Author

Fixed. Thanks.

@@ -559,9 +559,9 @@ func sanitizeChunkSequence(chks []chunks.Meta, mint int64, maxt int64, ignoreChk
var last *chunks.Meta

OUTER:
for _, c := range chks {
for i := range chks {
Member

Let's document this :)

Some linters catch errors.Errorf() as it's not really part of the errors
package.
We're comparing items by pointers; using Go's range value variable is
misleading here, and we need not fall into the same trap.
@jjneely
Contributor Author

jjneely commented Apr 16, 2019

CircleCI is failing on pkg/cluster with cluster_test.go:129: unexpected error: outdated metadata, which doesn't seem related to these changes. I've seen CircleCI encounter this a few times, but the tests pass locally for me, so I think this is a flaky test.

?       github.com/improbable-eng/thanos        [no test files]
ok      github.com/improbable-eng/thanos/cmd/thanos     0.030s
ok      github.com/improbable-eng/thanos/pkg/alert      0.004s
ok      github.com/improbable-eng/thanos/pkg/block      0.058s
?       github.com/improbable-eng/thanos/pkg/block/metadata     [no test files]
ok      github.com/improbable-eng/thanos/pkg/cluster    6.320s
ok      github.com/improbable-eng/thanos/pkg/compact    0.200s
ok      github.com/improbable-eng/thanos/pkg/compact/downsample 0.072s

@povilasv
Member

@jjneely I've rerun CI and it passes. We are planning to get rid of gossip soon, so those flaky tests will go away.

Member

@brancz brancz left a comment

Pretty dense PR, but looks good apart from a bit of a nit: we don't want to duplicate existing sorting logic.

id := all.At()

if err := indexr.Series(id, &lset, &chks); err != nil {
return err
}
// Make sure labels are in sorted order
sort.Slice(lset, func(i, j int) bool {
return lset[i].Name < lset[j].Name
Member

Contributor Author

Excellent. Thanks for that. I've updated the code to use sort.Sort()

This prevents us from having to re-implement label sorting.
@brancz
Member

brancz commented Apr 18, 2019

lgtm 👍

@jjneely
Contributor Author

jjneely commented Apr 18, 2019

Awesome, thanks!

Member

@bwplotka bwplotka left a comment

I think I like it, but there are some small issues.

We are releasing rc.0 in a couple of minutes, but don't worry, this should be fine to get into 0.4.0. Good work!

Thanks!

}
} else {
backupBkt, err = client.NewBucket(logger, backupconfContentYaml, reg, name)
// nil Prometheus registerer: don't create conflicting metrics
Member

This is not a good solution. It's essentially as easy as prometheus.WrapRegistererWithPrefix("backup_..., reg) (:

Member

But also, not sure if it matters, as these are only batch jobs; no one looks at the metrics ;p

Contributor Author

Yeah, otherwise we register the same metrics twice.

Member

No, you don't: It's essentially as easy as prometheus.WrapRegistererWithPrefix("backup_..., reg) (:

@@ -531,7 +531,7 @@ func IgnoreDuplicateOutsideChunk(_ int64, _ int64, last *chunks.Meta, curr *chun
// the current one.
if curr.MinTime != last.MinTime || curr.MaxTime != last.MaxTime {
return false, errors.Errorf("non-sequential chunks not equal: [%d, %d] and [%d, %d]",
last.MaxTime, last.MaxTime, curr.MinTime, curr.MaxTime)
last.MinTime, last.MaxTime, curr.MinTime, curr.MaxTime)
Member

wow! that was super confusing indeed, thanks for spotting!

@@ -559,9 +559,14 @@ func sanitizeChunkSequence(chks []chunks.Meta, mint int64, maxt int64, ignoreChk
var last *chunks.Meta

OUTER:
for _, c := range chks {
// This compares the current chunk to the chunk from the last iteration
// by pointers. If we use "i, c := range cks" the variable c is a new
Member

Suggested change
// by pointers. If we use "i, c := range cks" the variable c is a new
// by pointers. If we use "i, c := range chks" the variable c is a new

Contributor Author

Fixed

// This compares the current chunk to the chunk from the last iteration
// by pointers. If we use "i, c := range cks" the variable c is a new
// variable who's address doesn't change through the entire loop.
// The current element of the chks slice is copied into it. We must take
Member

Suggested change
// The current element of the chks slice is copied into it. We must take
// The current element of the chks slice is copied into it. We must take

Contributor Author

Fixed.

}

return repl, nil
}

type seriesRepair struct {
Member

Why not just use the series type, or name it series?

id := all.At()

if err := indexr.Series(id, &lset, &chks); err != nil {
return err
}
// Make sure labels are in sorted order
Member

Missing trailing period for comment.

Contributor Author

Fixed.

return errors.Wrap(all.Err(), "iterate series")
}

// sort the series -- if labels moved around the ordering will be different
Member

Let's always keep comments a full sentence.

Suggested change
// sort the series -- if labels moved around the ordering will be different
// Sort the series. If labels moved around the ordering will be different.

Contributor Author

Fixed.

return labels.Compare(series[i].lset, series[j].lset) < 0
})

// build new TSDB block
Member

wrong comment again (full sentence, please)

Contributor Author

Fixed.

@bwplotka
Member

Retrying CI

@bwplotka bwplotka merged commit b8c9dcf into thanos-io:master Apr 18, 2019
smalldirector pushed a commit to smalldirector/thanos that referenced this pull request Jun 20, 2019