Commit

Docs: move remaining pages with images to Hugo bundles (#1528)
* Moved ruler doc to Hugo bundles

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Moved architecture doc to Hugo bundles

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Moved deployment modes doc to Hugo bundles

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Moved production tips doc to Hugo bundles

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Moved Mimir logo image

Signed-off-by: Marco Pracucci <marco@pracucci.com>

* Simplified paths

Signed-off-by: Marco Pracucci <marco@pracucci.com>
pracucci committed Mar 22, 2022
1 parent 0500b51 commit 7e2b33d
Showing 22 changed files with 49 additions and 49 deletions.
2 changes: 1 addition & 1 deletion docs/sources/_index.md
@@ -16,7 +16,7 @@ keywords:

# Grafana Mimir Documentation

-![Grafana Mimir](./images/mimir-logo.png)
+![Grafana Mimir](mimir-logo.png)

Grafana Mimir is an open source software project that provides a scalable long-term storage for [Prometheus](https://prometheus.io). Some of the core strengths of Grafana Mimir include:

File renamed without changes
@@ -12,8 +12,8 @@ The system has multiple horizontally scalable microservices that can run separately
Grafana Mimir microservices are called components.

Grafana Mimir's design compiles the code for all components into a single binary.
-The `-target` parameter controls which component(s) that single binary will behave as. For those looking for a simple way to get started, Grafana Mimir can also be run in [monolithic mode]({{< relref "./deployment-modes.md#monolithic-mode" >}}), with all components running simultaneously in one process.
-For more information, refer to [Deployment modes]({{< relref "./deployment-modes.md" >}}).
+The `-target` parameter controls which component(s) that single binary will behave as. For those looking for a simple way to get started, Grafana Mimir can also be run in [monolithic mode]({{< relref "../deployment-modes/index.md#monolithic-mode" >}}), with all components running simultaneously in one process.
+For more information, refer to [Deployment modes]({{< relref "../deployment-modes/index.md" >}}).
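
A minimal Go sketch of the single-binary pattern that `-target` enables; this is illustrative, not Mimir's actual module system, and the component names and run functions are placeholders:

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	target := flag.String("target", "all", "component(s) this process should run")
	flag.Parse()

	// Illustrative registry: the real binary wires many more modules together
	// with shared dependencies.
	components := map[string]func(){
		"distributor": func() { fmt.Println("running distributor") },
		"ingester":    func() { fmt.Println("running ingester") },
		"querier":     func() { fmt.Println("running querier") },
	}

	names := strings.Split(*target, ",")
	if *target == "all" { // monolithic mode: every component in one process
		names = names[:0]
		for name := range components {
			names = append(names, name)
		}
	}
	for _, name := range names {
		run, ok := components[name]
		if !ok {
			fmt.Fprintf(os.Stderr, "unknown component %q\n", name)
			os.Exit(1)
		}
		run() // a real service would start each component concurrently
	}
}
```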

## Grafana Mimir components

@@ -25,7 +25,7 @@ Most components are stateless and do not require any data persisted between process restarts.

[//]: # "Diagram source of write path at https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_0_899"

-![Architecture of Grafana Mimir's write path](../../images/write-path.svg)
+![Architecture of Grafana Mimir's write path](write-path.svg)

Ingesters receive incoming samples from the distributors.
Each push request belongs to a tenant, and the ingester appends the received samples to the specific per-tenant TSDB that is stored on the local disk.
@@ -36,37 +36,37 @@ The per-tenant TSDB is lazily created in each ingester as soon as the first samples are received for that tenant.
The in-memory samples are periodically flushed to disk, and the WAL is truncated, when a new TSDB block is created.
By default, this occurs every two hours.
Each newly created block is uploaded to long-term storage and kept in the ingester until the configured `-blocks-storage.tsdb.retention-period` expires.
-This gives [queriers]({{< relref "components/querier.md" >}}) and [store-gateways]({{< relref "components/store-gateway.md" >}}) enough time to discover the new block on the storage and download its index-header.
+This gives [queriers]({{< relref "../components/querier.md" >}}) and [store-gateways]({{< relref "../components/store-gateway.md" >}}) enough time to discover the new block on the storage and download its index-header.

To effectively use the WAL, and to be able to recover the in-memory series if an ingester abruptly terminates, store the WAL to a persistent disk that can survive an ingester failure.
For example, when running in the cloud, use an AWS EBS volume or a GCP persistent disk.
If you are running the Grafana Mimir cluster in Kubernetes, you can use a StatefulSet with a persistent volume claim for the ingesters.
The WAL is stored in the same filesystem location as the local TSDB blocks (compacted from the head); the two locations cannot be decoupled.

-For more information, refer to [timeline of block uploads]({{< relref "../running-production-environment/production-tips/#how-to-estimate--querierquery-store-after" >}}) and [Ingester]({{< relref "components/ingester.md" >}}).
+For more information, refer to [timeline of block uploads]({{< relref "../../running-production-environment/production-tips/#how-to-estimate--querierquery-store-after" >}}) and [Ingester]({{< relref "../components/ingester.md" >}}).

#### Series sharding and replication

By default, each time series is replicated to three ingesters, and each ingester writes its own block to the long-term storage.
-The [Compactor]({{< relref "components/compactor/index.md" >}}) merges blocks from multiple ingesters into a single block, and removes duplicate samples.
+The [Compactor]({{< relref "../components/compactor/index.md" >}}) merges blocks from multiple ingesters into a single block, and removes duplicate samples.
Block compaction significantly reduces storage utilization.
-For more information, refer to [Compactor]({{< relref "components/compactor/index.md" >}}) and [Production tips]({{< relref "../running-production-environment/production-tips.md" >}}).
+For more information, refer to [Compactor]({{< relref "../components/compactor/index.md" >}}) and [Production tips]({{< relref "../../running-production-environment/production-tips.md" >}}).

### The read path

[//]: # "Diagram source of read path at https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_2_6"

-![Architecture of Grafana Mimir's read path](../../images/read-path.svg)
+![Architecture of Grafana Mimir's read path](read-path.svg)

-Queries coming into Grafana Mimir arrive at the [query-frontend]({{< relref "components/query-frontend" >}}). The query-frontend then splits queries over longer time ranges into multiple, smaller queries.
+Queries coming into Grafana Mimir arrive at the [query-frontend]({{< relref "../components/query-frontend" >}}). The query-frontend then splits queries over longer time ranges into multiple, smaller queries.
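
A simplified Go sketch of that splitting step: a long time range is cut into interval-aligned sub-queries. The 24-hour interval is illustrative; Mimir's actual split interval and caching behavior are configurable:

```go
package main

import (
	"fmt"
	"time"
)

// splitByInterval cuts a [start, end) query range into interval-aligned
// sub-ranges, the way a long range query can be split into smaller ones
// that are cached and executed independently.
func splitByInterval(start, end time.Time, interval time.Duration) [][2]time.Time {
	var out [][2]time.Time
	for cur := start; cur.Before(end); {
		next := cur.Truncate(interval).Add(interval)
		if next.After(end) {
			next = end
		}
		out = append(out, [2]time.Time{cur, next})
		cur = next
	}
	return out
}

func main() {
	start := time.Date(2022, 3, 21, 18, 0, 0, 0, time.UTC)
	end := time.Date(2022, 3, 23, 6, 0, 0, 0, time.UTC)
	for _, r := range splitByInterval(start, end, 24*time.Hour) {
		fmt.Println(r[0], "->", r[1]) // three day-aligned sub-queries
	}
}
```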

The query-frontend next checks the results cache. If the result of a query has been cached, the query-frontend returns the cached results. Queries that cannot be answered from the results cache are put into an in-memory queue within the query-frontend.

-> **Note:** If you run the optional [query-scheduler]({{< relref "components/query-scheduler" >}}) component, this queue is maintained in the query-scheduler instead of the query-frontend.
+> **Note:** If you run the optional [query-scheduler]({{< relref "../components/query-scheduler" >}}) component, this queue is maintained in the query-scheduler instead of the query-frontend.
The queriers act as workers, pulling queries from the queue.

-The queriers connect to the store-gateways and the ingesters to fetch all the data needed to execute a query. For more information about how the query is executed, refer to [querier]({{< relref "components/querier.md" >}}).
+The queriers connect to the store-gateways and the ingesters to fetch all the data needed to execute a query. For more information about how the query is executed, refer to [querier]({{< relref "../components/querier.md" >}}).

After the querier executes the query, it returns the results to the query-frontend for aggregation. The query-frontend then returns the aggregated results to the client.

@@ -75,9 +75,9 @@ After the querier executes the query, it returns the results to the query-frontend for aggregation.
Prometheus instances scrape samples from various targets and push them to Grafana Mimir by using Prometheus’ [remote write API](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations).
The remote write API emits batched [Snappy](https://google.github.io/snappy/)-compressed [Protocol Buffer](https://developers.google.com/protocol-buffers/) messages inside the body of an HTTP `POST` request.

-Mimir requires that each HTTP request has a header that specifies a tenant ID for the request. Request [authentication and authorization]({{< relref "../securing/authentication-and-authorization.md" >}}) are handled by an external reverse proxy.
+Mimir requires that each HTTP request has a header that specifies a tenant ID for the request. Request [authentication and authorization]({{< relref "../../securing/authentication-and-authorization.md" >}}) are handled by an external reverse proxy.
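
A minimal Go sketch of such a push request: a Snappy-compressed protobuf `WriteRequest` sent with a tenant ID header. The `/api/v1/push` path and the `X-Scope-OrgID` header follow Mimir's conventions; the host and tenant values are placeholders, and the remote write specification mandates HTTP `POST`:

```go
package main

import (
	"bytes"
	"log"
	"net/http"
	"time"

	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	req := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "demo_metric"},
				{Name: "job", Value: "demo"},
			},
			Samples: []prompb.Sample{
				{Value: 42, Timestamp: time.Now().UnixMilli()},
			},
		}},
	}

	data, err := req.Marshal() // protobuf-encode the write request
	if err != nil {
		log.Fatal(err)
	}
	body := snappy.Encode(nil, data) // then Snappy-compress it

	httpReq, err := http.NewRequest(http.MethodPost,
		"http://mimir.example.com/api/v1/push", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	httpReq.Header.Set("Content-Encoding", "snappy")
	httpReq.Header.Set("Content-Type", "application/x-protobuf")
	httpReq.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
	httpReq.Header.Set("X-Scope-OrgID", "tenant-1") // tenant ID header

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```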

-Incoming samples (writes from Prometheus) are handled by the [distributor]({{< relref "#distributor" >}}), and incoming reads (PromQL queries) are handled by the [query frontend]({{< relref "#query-frontend" >}}).
+Incoming samples (writes from Prometheus) are handled by the [distributor]({{< relref "../components/distributor.md" >}}), and incoming reads (PromQL queries) are handled by the [query frontend]({{< relref "../#query-frontend" >}}).

## Long-term storage

@@ -14,7 +14,7 @@ Disabling the bucket index is not recommended.

## Benefits

-The [querier]({{< relref "../components/querier.md" >}}), [store-gateway]({{< relref "../components/store-gateway.md" >}}) and [ruler]({{< relref "../components/ruler.md" >}}) must have an almost up-to-date view of the storage bucket, in order to find the right blocks to lookup at query time (querier) and load block's [index-header]({{< relref "../binary-index-header.md" >}}) (store-gateway).
+The [querier]({{< relref "../components/querier.md" >}}), [store-gateway]({{< relref "../components/store-gateway.md" >}}) and [ruler]({{< relref "../components/ruler/index.md" >}}) must have an almost up-to-date view of the storage bucket, in order to find the right blocks to lookup at query time (querier) and load block's [index-header]({{< relref "../binary-index-header.md" >}}) (store-gateway).
Because of this, they need to periodically scan the bucket to look for new blocks uploaded by ingesters or the compactor, and blocks deleted (or marked for deletion) by the compactor.

When the bucket index is enabled, the querier, store-gateway, and ruler periodically look up the per-tenant bucket index instead of scanning the bucket via `list objects` operations.
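
As an illustration of that lookup, a hedged Go sketch that reads a per-tenant bucket index. The `bucket-index.json.gz` object name matches Mimir's bucket layout; the struct below is a simplified subset of the real index:

```go
package main

import (
	"compress/gzip"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// BucketIndex is a simplified subset of the fields a bucket index holds;
// the real index also tracks deletion marks and update timestamps.
type BucketIndex struct {
	Version int `json:"version"`
	Blocks  []struct {
		ID      string `json:"block_id"`
		MinTime int64  `json:"min_time"`
		MaxTime int64  `json:"max_time"`
	} `json:"blocks"`
}

func main() {
	// In a real deployment this object is fetched from the bucket at
	// <tenant>/bucket-index.json.gz; a local file stands in for it here.
	f, err := os.Open("bucket-index.json.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	defer gz.Close()

	var idx BucketIndex
	if err := json.NewDecoder(gz).Decode(&idx); err != nil {
		log.Fatal(err)
	}
	// One small read replaces repeated `list objects` calls over the bucket.
	fmt.Printf("bucket index v%d lists %d blocks\n", idx.Version, len(idx.Blocks))
}
```
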
@@ -47,7 +47,7 @@ The overhead introduced by keeping the bucket index updated is not significant.

## How it's used by the querier

-At query time the [querier]({{< relref "../components/querier.md" >}}) and [ruler]({{< relref "../components/ruler.md" >}}) determine whether the bucket index for the tenant has already been loaded to memory.
+At query time the [querier]({{< relref "../components/querier.md" >}}) and [ruler]({{< relref "../components/ruler/index.md" >}}) determine whether the bucket index for the tenant has already been loaded to memory.
If not, the querier and ruler download it from the storage and cache it.

Because the bucket index is a small file, lazily downloading it doesn't have a significant impact on first-query performance, but it does allow a querier to get up and running without pre-downloading every tenant's bucket index.
@@ -8,7 +8,7 @@ weight: 100
# (Optional) Grafana Mimir Alertmanager

The Mimir Alertmanager adds multi-tenancy support and horizontal scalability to the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/alertmanager/).
-The Mimir Alertmanager is an optional component that accepts alert notifications from the [Mimir ruler]({{< relref "ruler.md" >}}).
+The Mimir Alertmanager is an optional component that accepts alert notifications from the [Mimir ruler]({{< relref "ruler/index.md" >}}).
The Alertmanager deduplicates and groups alert notifications, and routes them to a notification channel, such as email, PagerDuty, or OpsGenie.

## Multi-tenancy
@@ -12,7 +12,7 @@ The compactor increases query performance and reduces long-term storage usage by
The compactor is the component responsible for:

- Compacting multiple blocks of a given tenant into a single, optimized larger block. This deduplicates chunks and reduces the size of the index, resulting in reduced storage costs. Querying fewer blocks is faster, so it also increases query speed.
-- Keeping the per-tenant bucket index updated. The [bucket index]({{< relref "../../bucket-index/index.md" >}}) is used by [queriers]({{< relref "../querier.md" >}}), [store-gateways]({{< relref "../store-gateway.md" >}}), and [rulers]({{< relref "../ruler.md" >}}) to discover both new blocks and deleted blocks in the storage.
+- Keeping the per-tenant bucket index updated. The [bucket index]({{< relref "../../bucket-index/index.md" >}}) is used by [queriers]({{< relref "../querier.md" >}}), [store-gateways]({{< relref "../store-gateway.md" >}}), and [rulers]({{< relref "../ruler/index.md" >}}) to discover both new blocks and deleted blocks in the storage.
- Deleting blocks that are no longer within a configurable retention period.

The compactor is stateless.
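
A toy Go sketch of the deduplication idea behind compaction: merge time-ordered sample streams from replicated ingesters and keep one sample per timestamp. Real compaction operates on TSDB chunks and indexes, not plain slices:

```go
package main

import "fmt"

// Sample is a simplified (timestamp, value) pair; real compaction works on
// TSDB chunks rather than individual samples.
type Sample struct {
	TS  int64
	Val float64
}

// mergeDedup merges two time-ordered streams of the same series, keeping a
// single sample per timestamp, the way overlapping ingester blocks collapse
// into one compacted block.
func mergeDedup(a, b []Sample) []Sample {
	out := make([]Sample, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i].TS < b[j].TS:
			out = append(out, a[i])
			i++
		case a[i].TS > b[j].TS:
			out = append(out, b[j])
			j++
		default: // duplicate timestamp: replicas wrote the same sample
			out = append(out, a[i])
			i, j = i+1, j+1
		}
	}
	out = append(out, a[i:]...)
	return append(out, b[j:]...)
}

func main() {
	a := []Sample{{1, 10}, {2, 20}, {4, 40}}
	b := []Sample{{2, 20}, {3, 30}, {4, 40}}
	fmt.Println(mergeDedup(a, b)) // [{1 10} {2 20} {3 30} {4 40}]
}
```
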
@@ -132,4 +132,4 @@ Alternatively, assuming the largest `-compactor.block-ranges` is `24h` (the default)
Refer to the [compactor](../../../configuring/reference-configuration-parameters/#compactor)
block section and the [limits](../../../configuring/reference-configuration-parameters/#limits) block section for details of compaction-related configuration.

-The [alertmanager]({{< relref "../alertmanager.md" >}}) and [ruler]({{< relref "../ruler.md" >}}) components can also use object storage to store their configurations and rules uploaded by users. In that case a separate bucket should be created to store alertmanager configurations and rules: using the same bucket between ruler/alertmanager and blocks will cause issues with the compactor.
+The [alertmanager]({{< relref "../alertmanager.md" >}}) and [ruler]({{< relref "../ruler/index.md" >}}) components can also use object storage to store their configurations and rules uploaded by users. In that case a separate bucket should be created to store alertmanager configurations and rules: using the same bucket between ruler/alertmanager and blocks will cause issues with the compactor.
@@ -12,16 +12,16 @@ Each tenant has a set of recording and alerting rules and can group those rules

[//]: # "Diagram source of ruler interactions https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_0_938"

-![Architecture of Grafana Mimir's ruler component](../../../images/ruler.svg)
+![Architecture of Grafana Mimir's ruler component](ruler.svg)

## Recording rules

The ruler evaluates the expressions in the recording rules at regular intervals and writes the results back to the ingesters.
The ruler has a built-in querier that evaluates the PromQL expressions and a built-in distributor, so that it can write directly to the ingesters.
Configuration of the built-in querier and distributor uses their respective configuration parameters:

-- [Querier]({{< relref "../../configuring/reference-configuration-parameters/index.md#querier" >}})
-- [Distributor]({{< relref "../../configuring/reference-configuration-parameters/index.md#distributor" >}})
+- [Querier]({{< relref "../../../configuring/reference-configuration-parameters/index.md#querier" >}})
+- [Distributor]({{< relref "../../../configuring/reference-configuration-parameters/index.md#distributor" >}})

## Alerting rules

@@ -31,20 +31,20 @@ After the alert has been active for the entire `for` duration, it enters the **FIRING** state.
The ruler then notifies Alertmanagers of any **FIRING** (`firing`) alerts.

Configure the addresses of Alertmanagers with the `-ruler.alertmanager-url` flag, which supports the DNS service discovery format.
-For more information about DNS service discovery, refer to [Supported discovery modes]({{< relref "../../configuring/about-dns-service-discovery.md" >}}).
+For more information about DNS service discovery, refer to [Supported discovery modes]({{< relref "../../../configuring/about-dns-service-discovery.md" >}}).
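
For example (the hostname is illustrative), `-ruler.alertmanager-url=dnssrv+http://_http._tcp.alertmanager.default.svc.cluster.local/alertmanager` discovers Alertmanager replicas through DNS SRV lookups.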

## Sharding

The ruler supports multi-tenancy and horizontal scalability.
To achieve horizontal scalability, the ruler shards the execution of rules by rule groups.
-Ruler replicas form their own [hash ring]({{< relref "../hash-ring/index.md" >}}) stored in the [KV store]({{< relref "../key-value-store.md" >}}) to divide the work of the executing rules.
+Ruler replicas form their own [hash ring]({{< relref "../../hash-ring/index.md" >}}) stored in the [KV store]({{< relref "../../key-value-store.md" >}}) to divide the work of the executing rules.
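
A deliberately simplified Go sketch of that sharding idea: hash a rule group's identity and pick an owning ruler. The real hash ring uses registered tokens and supports replication; the modulo below stands in for that:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// ownerOf hashes a rule group's identity and maps it onto one of the ruler
// replicas, so each group is evaluated by exactly one ruler.
func ownerOf(tenant, namespace, group string, rulers []string) string {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s/%s/%s", tenant, namespace, group)
	return rulers[h.Sum32()%uint32(len(rulers))]
}

func main() {
	rulers := []string{"ruler-0", "ruler-1", "ruler-2"}
	for _, group := range []string{"node-alerts", "recording-rules", "slo"} {
		fmt.Printf("%s -> %s\n", group, ownerOf("tenant-1", "default", group, rulers))
	}
}
```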

-To configure the rulers' hash ring, refer to [configuring hash rings]({{< relref "../../configuring/configuring-hash-rings.md" >}}).
+To configure the rulers' hash ring, refer to [configuring hash rings]({{< relref "../../../configuring/configuring-hash-rings.md" >}}).

## HTTP configuration API

The ruler HTTP configuration API enables tenants to create, update, and delete rule groups.
-For a complete list of endpoints and example requests, refer to [ruler]({{< relref "../../reference-http-api/_index.md#ruler" >}}).
+For a complete list of endpoints and example requests, refer to [ruler]({{< relref "../../../reference-http-api/_index.md#ruler" >}}).
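
A hedged Go sketch of using that API: the request below uploads one rule group as YAML. The `/prometheus/config/v1/rules/<namespace>` path follows the Mimir HTTP API reference; the host and tenant are placeholders:

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

func main() {
	// A single rule group in the Prometheus rule group YAML format.
	group := `name: example
interval: 1m
rules:
  - record: job:up:sum
    expr: sum by (job) (up)
`
	req, err := http.NewRequest(http.MethodPost,
		"http://mimir.example.com/prometheus/config/v1/rules/default",
		strings.NewReader(group))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/yaml")
	req.Header.Set("X-Scope-OrgID", "tenant-1") // rule groups are per tenant

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```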

## State

@@ -55,13 +55,13 @@ The ruler supports the following backends:
- [Google Cloud Storage](https://cloud.google.com/storage/): `-ruler-storage.backend=gcs`
- [Microsoft Azure Storage](https://azure.microsoft.com/en-us/services/storage/): `-ruler-storage.backend=azure`
- [OpenStack Swift](https://wiki.openstack.org/wiki/Swift): `-ruler-storage.backend=swift`
-- [Local storage]({{< relref "#local-storage" >}}): `-ruler-storage.backend=local`
+- [Local storage]({{< relref "../#local-storage" >}}): `-ruler-storage.backend=local`

### Local storage

The `local` storage backend reads [Prometheus recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) from the local filesystem.

-> **Note:** Local storage is a read-only backend that does not support the creation and deletion of rules through the [Configuration API]({{< relref "#http-configuration-api" >}}).
+> **Note:** Local storage is a read-only backend that does not support the creation and deletion of rules through the [Configuration API]({{< relref "../#http-configuration-api" >}}).
When all rulers have the same rule files, local storage supports ruler sharding.
To facilitate sharding in Kubernetes, mount a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) into every ruler pod.
@@ -8,7 +8,7 @@ weight: 70
# Grafana Mimir store-gateway

The store-gateway component, which is stateful, queries blocks from [long-term storage]({{< relref "./_index.md#long-term-storage" >}}).
-On the read path, the [querier]({{< relref "./querier.md" >}}) and the [ruler]({{< relref "./ruler.md">}}) use the store-gateway when handling the query, whether the query comes from a user or from when a rule is being evaluated.
+On the read path, the [querier]({{< relref "./querier.md" >}}) and the [ruler]({{< relref "./ruler/index.md">}}) use the store-gateway when handling the query, whether the query comes from a user or from when a rule is being evaluated.

To find the right blocks to look up at query time, the store-gateway requires an almost up-to-date view of the bucket in long-term storage.
The store-gateway keeps the bucket view updated using one of the following options:
