From 7e2b33d56a363aaa51dc1221ba6ac01b1018c7d2 Mon Sep 17 00:00:00 2001 From: Marco Pracucci Date: Tue, 22 Mar 2022 11:07:23 +0100 Subject: [PATCH] Docs: move remaining pages with images to Hugo bundles (#1528) * Moved ruler doc to Hugo bundles Signed-off-by: Marco Pracucci * Moved architecture doc to Hugo bundles Signed-off-by: Marco Pracucci * Moved deployment modes doc to Hugo bundles Signed-off-by: Marco Pracucci * Moved production tips doc to Hugo bundles Signed-off-by: Marco Pracucci * Moved Mimir logo image Signed-off-by: Marco Pracucci * Simplified paths Signed-off-by: Marco Pracucci --- docs/sources/_index.md | 2 +- docs/sources/{images => }/mimir-logo.png | Bin .../index.md} | 26 +++++++++--------- .../read-path.svg | 0 .../write-path.svg | 0 .../architecture/bucket-index/index.md | 4 +-- .../architecture/components/alertmanager.md | 2 +- .../components/compactor/index.md | 4 +-- .../components/{ruler.md => ruler/index.md} | 18 ++++++------ .../components/ruler}/ruler.svg | 0 .../architecture/components/store-gateway.md | 2 +- .../index.md} | 8 +++--- .../deployment-modes}/microservices-mode.svg | 0 .../deployment-modes}/monolithic-mode.svg | 0 .../scaled-monolithic-mode.svg | 0 .../architecture/hash-ring/index.md | 2 +- .../deploying-grafana-mimir/_index.md | 4 +-- .../operators-guide/getting-started/_index.md | 2 +- .../operators-guide/reference-glossary.md | 4 +-- .../planning-capacity.md | 2 +- .../avoid-querying-non-compacted-blocks.png | Bin .../index.md} | 18 ++++++------ 22 files changed, 49 insertions(+), 49 deletions(-) rename docs/sources/{images => }/mimir-logo.png (100%) rename docs/sources/operators-guide/architecture/{about-grafana-mimir-architecture.md => about-grafana-mimir-architecture/index.md} (76%) rename docs/sources/operators-guide/{images => architecture/about-grafana-mimir-architecture}/read-path.svg (100%) rename docs/sources/operators-guide/{images => architecture/about-grafana-mimir-architecture}/write-path.svg (100%) rename docs/sources/operators-guide/architecture/components/{ruler.md => ruler/index.md} (80%) rename docs/sources/operators-guide/{images => architecture/components/ruler}/ruler.svg (100%) rename docs/sources/operators-guide/architecture/{deployment-modes.md => deployment-modes/index.md} (88%) rename docs/sources/operators-guide/{images => architecture/deployment-modes}/microservices-mode.svg (100%) rename docs/sources/operators-guide/{images => architecture/deployment-modes}/monolithic-mode.svg (100%) rename docs/sources/operators-guide/{images => architecture/deployment-modes}/scaled-monolithic-mode.svg (100%) rename docs/sources/operators-guide/{images => running-production-environment/production-tips}/avoid-querying-non-compacted-blocks.png (100%) rename docs/sources/operators-guide/running-production-environment/{production-tips.md => production-tips/index.md} (86%) diff --git a/docs/sources/_index.md b/docs/sources/_index.md index b28c293c011..b3995a789c7 100644 --- a/docs/sources/_index.md +++ b/docs/sources/_index.md @@ -16,7 +16,7 @@ keywords: # Grafana Mimir Documentation -![Grafana Mimir](./images/mimir-logo.png) +![Grafana Mimir](mimir-logo.png) Grafana Mimir is an open source software project that provides a scalable long-term storage for [Prometheus](https://prometheus.io). 
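Prometheus ships samples to Grafana Mimir over its remote write API, and each request carries a tenant ID header. A minimal sketch of the Prometheus side, where the endpoint path, port, and `X-Scope-OrgID` header name are assumptions to adapt to your deployment:

```yaml
# prometheus.yml excerpt: forward scraped samples to Grafana Mimir.
remote_write:
  - url: http://mimir.example.com:8080/api/v1/push # assumed Mimir address and push path
    headers:
      X-Scope-OrgID: tenant-1 # assumed tenant ID header; often injected by a reverse proxy instead
```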
Some of the core strengths of Grafana Mimir include: diff --git a/docs/sources/images/mimir-logo.png b/docs/sources/mimir-logo.png similarity index 100% rename from docs/sources/images/mimir-logo.png rename to docs/sources/mimir-logo.png diff --git a/docs/sources/operators-guide/architecture/about-grafana-mimir-architecture.md b/docs/sources/operators-guide/architecture/about-grafana-mimir-architecture/index.md similarity index 76% rename from docs/sources/operators-guide/architecture/about-grafana-mimir-architecture.md rename to docs/sources/operators-guide/architecture/about-grafana-mimir-architecture/index.md index d8208bd9f14..d6c583bbc69 100644 --- a/docs/sources/operators-guide/architecture/about-grafana-mimir-architecture.md +++ b/docs/sources/operators-guide/architecture/about-grafana-mimir-architecture/index.md @@ -12,8 +12,8 @@ The system has multiple horizontally scalable microservices that can run separat Grafana Mimir microservices are called components. Grafana Mimir's design compiles the code for all components into a single binary. -The `-target` parameter controls which component(s) that single binary will behave as. For those looking for a simple way to get started, Grafana Mimir can also be run in [monolithic mode]({{< relref "./deployment-modes.md#monolithic-mode" >}}), with all components running simultaneously in one process. -For more information, refer to [Deployment modes]({{< relref "./deployment-modes.md" >}}). +The `-target` parameter controls which component(s) that single binary will behave as. For those looking for a simple way to get started, Grafana Mimir can also be run in [monolithic mode]({{< relref "../deployment-modes/index.md#monolithic-mode" >}}), with all components running simultaneously in one process. +For more information, refer to [Deployment modes]({{< relref "../deployment-modes/index.md" >}}). ## Grafana Mimir components @@ -25,7 +25,7 @@ Most components are stateless and do not require any data persisted between proc [//]: # "Diagram source of write path at https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_0_899" -![Architecture of Grafana Mimir's write path](../../images/write-path.svg) +![Architecture of Grafana Mimir's write path](write-path.svg) Ingesters receive incoming samples from the distributors. Each push request belongs to a tenant, and the ingester appends the received samples to the specific per-tenant TSDB that is stored on the local disk. @@ -36,37 +36,37 @@ The per-tenant TSDB is lazily created in each ingester as soon as the first samp The in-memory samples are periodically flushed to disk, and the WAL is truncated, when a new TSDB block is created. By default, this occurs every two hours. Each newly created block is uploaded to long-term storage and kept in the ingester until the configured `-blocks-storage.tsdb.retention-period` expires. -This gives [queriers]({{< relref "components/querier.md" >}}) and [store-gateways]({{< relref "components/store-gateway.md" >}}) enough time to discover the new block on the storage and download its index-header. +This gives [queriers]({{< relref "../components/querier.md" >}}) and [store-gateways]({{< relref "../components/store-gateway.md" >}}) enough time to discover the new block on the storage and download its index-header. To effectively use the WAL, and to be able to recover the in-memory series if an ingester abruptly terminates, store the WAL to a persistent disk that can survive an ingester failure. 
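On Kubernetes, the usual way to give each ingester such a disk is a StatefulSet whose persistent volume claim outlives the pod, as the next paragraph notes. A minimal sketch, in which the image tag, volume size, and mount path are illustrative assumptions:

```yaml
# StatefulSet sketch: keep the ingester WAL and local TSDB blocks on a persistent volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mimir-ingester
spec:
  serviceName: mimir-ingester
  replicas: 3
  selector:
    matchLabels:
      app: mimir-ingester
  template:
    metadata:
      labels:
        app: mimir-ingester
    spec:
      containers:
        - name: ingester
          image: grafana/mimir:2.0.0 # illustrative tag
          args:
            - -target=ingester
            - -config.file=/etc/mimir/mimir.yaml # config volume omitted for brevity
          volumeMounts:
            - name: storage
              mountPath: /data # assumed WAL and TSDB directory
  volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi # size depends on series count and -blocks-storage.tsdb.retention-period
```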
For example, when running in the cloud, include an AWS EBS volume or a GCP persistent disk. If you are running the Grafana Mimir cluster in Kubernetes, you can use a StatefulSet with a persistent volume claim for the ingesters. The location on the filesystem where the WAL is stored is the same location where local TSDB blocks (compacted from head) are stored. The location of the filesystem and the location of the local TSDB blocks cannot be decoupled. -For more information, refer to [timeline of block uploads]({{< relref "../running-production-environment/production-tips/#how-to-estimate--querierquery-store-after" >}}) and [Ingester]({{< relref "components/ingester.md" >}}). +For more information, refer to [timeline of block uploads]({{< relref "../../running-production-environment/production-tips/#how-to-estimate--querierquery-store-after" >}}) and [Ingester]({{< relref "../components/ingester.md" >}}). #### Series sharding and replication By default, each time series is replicated to three ingesters, and each ingester writes its own block to the long-term storage. -The [Compactor]({{< relref "components/compactor/index.md" >}}) merges blocks from multiple ingesters into a single block, and removes duplicate samples. +The [Compactor]({{< relref "../components/compactor/index.md" >}}) merges blocks from multiple ingesters into a single block, and removes duplicate samples. Blocks compaction significantly reduces storage utilization. -For more information, refer to [Compactor]({{< relref "components/compactor/index.md" >}}) and [Production tips]({{< relref "../running-production-environment/production-tips.md" >}}). +For more information, refer to [Compactor]({{< relref "../components/compactor/index.md" >}}) and [Production tips]({{< relref "../../running-production-environment/production-tips.md" >}}). ### The read path [//]: # "Diagram source of read path at https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_2_6" -![Architecture of Grafana Mimir's read path](../../images/read-path.svg) +![Architecture of Grafana Mimir's read path](read-path.svg) -Queries coming into Grafana Mimir arrive at the [query-frontend]({{< relref "components/query-frontend" >}}). The query-frontend then splits queries over longer time ranges into multiple, smaller queries. +Queries coming into Grafana Mimir arrive at the [query-frontend]({{< relref "../components/query-frontend" >}}). The query-frontend then splits queries over longer time ranges into multiple, smaller queries. The query-frontend next checks the results cache. If the result of a query has been cached, the query-frontend returns the cached results. Queries that cannot be answered from the results cache are put into an in-memory queue within the query-frontend. -> **Note:** If you run the optional [query-scheduler]({{< relref "components/query-scheduler" >}}) component, this queue is maintained in the query-scheduler instead of the query-frontend. +> **Note:** If you run the optional [query-scheduler]({{< relref "../components/query-scheduler" >}}) component, this queue is maintained in the query-scheduler instead of the query-frontend. The queriers act as workers, pulling queries from the queue. -The queriers connect to the store-gateways and the ingesters to fetch all the data needed to execute a query. For more information about how the query is executed, refer to [querier]({{< relref "components/querier.md" >}}). 
+The queriers connect to the store-gateways and the ingesters to fetch all the data needed to execute a query. For more information about how the query is executed, refer to [querier]({{< relref "../components/querier.md" >}}). After the querier executes the query, it returns the results to the query-frontend for aggregation. The query-frontend then returns the aggregated results to the client. @@ -75,9 +75,9 @@ After the querier executes the query, it returns the results to the query-fronte Prometheus instances scrape samples from various targets and push them to Grafana Mimir by using Prometheus’ [remote write API](https://prometheus.io/docs/prometheus/latest/storage/#remote-storage-integrations). The remote write API emits batched [Snappy](https://google.github.io/snappy/)-compressed [Protocol Buffer](https://developers.google.com/protocol-buffers/) messages inside the body of an HTTP `PUT` request. -Mimir requires that each HTTP request has a header that specifies a tenant ID for the request. Request [authentication and authorization]({{< relref "../securing/authentication-and-authorization.md" >}}) are handled by an external reverse proxy. +Mimir requires that each HTTP request has a header that specifies a tenant ID for the request. Request [authentication and authorization]({{< relref "../../securing/authentication-and-authorization.md" >}}) are handled by an external reverse proxy. -Incoming samples (writes from Prometheus) are handled by the [distributor]({{< relref "#distributor" >}}), and incoming reads (PromQL queries) are handled by the [query frontend]({{< relref "#query-frontend" >}}). +Incoming samples (writes from Prometheus) are handled by the [distributor]({{< relref "../components/distributor.md" >}}), and incoming reads (PromQL queries) are handled by the [query frontend]({{< relref "../#query-frontend" >}}). ## Long-term storage diff --git a/docs/sources/operators-guide/images/read-path.svg b/docs/sources/operators-guide/architecture/about-grafana-mimir-architecture/read-path.svg similarity index 100% rename from docs/sources/operators-guide/images/read-path.svg rename to docs/sources/operators-guide/architecture/about-grafana-mimir-architecture/read-path.svg diff --git a/docs/sources/operators-guide/images/write-path.svg b/docs/sources/operators-guide/architecture/about-grafana-mimir-architecture/write-path.svg similarity index 100% rename from docs/sources/operators-guide/images/write-path.svg rename to docs/sources/operators-guide/architecture/about-grafana-mimir-architecture/write-path.svg diff --git a/docs/sources/operators-guide/architecture/bucket-index/index.md b/docs/sources/operators-guide/architecture/bucket-index/index.md index 141497366d2..a8e4d1f9dee 100644 --- a/docs/sources/operators-guide/architecture/bucket-index/index.md +++ b/docs/sources/operators-guide/architecture/bucket-index/index.md @@ -14,7 +14,7 @@ Disabling the bucket index is not recommended. ## Benefits -The [querier]({{< relref "../components/querier.md" >}}), [store-gateway]({{< relref "../components/store-gateway.md" >}}) and [ruler]({{< relref "../components/ruler.md" >}}) must have an almost up-to-date view of the storage bucket, in order to find the right blocks to lookup at query time (querier) and load block's [index-header]({{< relref "../binary-index-header.md" >}}) (store-gateway). 
+The [querier]({{< relref "../components/querier.md" >}}), [store-gateway]({{< relref "../components/store-gateway.md" >}}) and [ruler]({{< relref "../components/ruler/index.md" >}}) must have an almost up-to-date view of the storage bucket, in order to find the right blocks to lookup at query time (querier) and load block's [index-header]({{< relref "../binary-index-header.md" >}}) (store-gateway). Because of this, they need to periodically scan the bucket to look for new blocks uploaded by ingester or compactor, and blocks deleted (or marked for deletion) by compactor. When the bucket index is enabled, the querier, store-gateway, and ruler periodically look up the per-tenant bucket index instead of scanning the bucket via `list objects` operations. @@ -47,7 +47,7 @@ The overhead introduced by keeping the bucket index updated is not signifcant. ## How it's used by the querier -At query time the [querier]({{< relref "../components/querier.md" >}}) and [ruler]({{< relref "../components/ruler.md" >}}) determine whether the bucket index for the tenant has already been loaded to memory. +At query time the [querier]({{< relref "../components/querier.md" >}}) and [ruler]({{< relref "../components/ruler/index.md" >}}) determine whether the bucket index for the tenant has already been loaded to memory. If not, the querier and ruler download it from the storage and cache it. Because the bucket index is a small file, lazy downloading it doesn't have a significant impact on first query performances, but it does allow a querier to get up and running without pre-downloading every tenant's bucket index. diff --git a/docs/sources/operators-guide/architecture/components/alertmanager.md b/docs/sources/operators-guide/architecture/components/alertmanager.md index 0f4a0b3f591..4e3ee9af178 100644 --- a/docs/sources/operators-guide/architecture/components/alertmanager.md +++ b/docs/sources/operators-guide/architecture/components/alertmanager.md @@ -8,7 +8,7 @@ weight: 100 # (Optional) Grafana Mimir Alertmanager The Mimir Alertmanager adds multi-tenancy support and horizontal scalability to the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/alertmanager/). -The Mimir Alertmanager is an optional component that accepts alert notifications from the [Mimir ruler]({{< relref "ruler.md" >}}). +The Mimir Alertmanager is an optional component that accepts alert notifications from the [Mimir ruler]({{< relref "ruler/index.md" >}}). The Alertmanager deduplicates and groups alert notifications, and routes them to a notification channel, such as email, PagerDuty, or OpsGenie. ## Multi-tenancy diff --git a/docs/sources/operators-guide/architecture/components/compactor/index.md b/docs/sources/operators-guide/architecture/components/compactor/index.md index 7f4fa80fbc1..bf678ef0da1 100644 --- a/docs/sources/operators-guide/architecture/components/compactor/index.md +++ b/docs/sources/operators-guide/architecture/components/compactor/index.md @@ -12,7 +12,7 @@ The compactor increases query performance and reduces long-term storage usage by The compactor is the component responsible for: - Compacting multiple blocks of a given tenant into a single, optimized larger block. This deduplicates chunks and reduces the size of the index, resulting in reduced storage costs. Querying fewer blocks is faster, so it also increases query speed. -- Keeping the per-tenant bucket index updated. 
The [bucket index]({{< relref "../../bucket-index/index.md" >}}) is used by [queriers]({{< relref "../querier.md" >}}), [store-gateways]({{< relref "../store-gateway.md" >}}), and [rulers]({{< relref "../ruler.md" >}}) to discover both new blocks and deleted blocks in the storage. +- Keeping the per-tenant bucket index updated. The [bucket index]({{< relref "../../bucket-index/index.md" >}}) is used by [queriers]({{< relref "../querier.md" >}}), [store-gateways]({{< relref "../store-gateway.md" >}}), and [rulers]({{< relref "../ruler/index.md" >}}) to discover both new blocks and deleted blocks in the storage. - Deleting blocks that are no longer within a configurable retention period. The compactor is stateless. @@ -132,4 +132,4 @@ Alternatively, assuming the largest `-compactor.block-ranges` is `24h` (the defa Refer to the [compactor](../../../configuring/reference-configuration-parameters/#compactor) block section and the [limits](../../../configuring/reference-configuration-parameters/#limits) block section for details of compaction-related configuration. -The [alertmanager]({{< relref "../alertmanager.md" >}}) and [ruler]({{< relref "../ruler.md" >}}) components can also use object storage to store their configurations and rules uploaded by users. In that case a separate bucket should be created to store alertmanager configurations and rules: using the same bucket between ruler/alertmanager and blocks will cause issues with the compactor. +The [alertmanager]({{< relref "../alertmanager.md" >}}) and [ruler]({{< relref "../ruler/index.md" >}}) components can also use object storage to store their configurations and rules uploaded by users. In that case a separate bucket should be created to store alertmanager configurations and rules: using the same bucket between ruler/alertmanager and blocks will cause issues with the compactor. diff --git a/docs/sources/operators-guide/architecture/components/ruler.md b/docs/sources/operators-guide/architecture/components/ruler/index.md similarity index 80% rename from docs/sources/operators-guide/architecture/components/ruler.md rename to docs/sources/operators-guide/architecture/components/ruler/index.md index de23dd3f2ac..f5497bc9c53 100644 --- a/docs/sources/operators-guide/architecture/components/ruler.md +++ b/docs/sources/operators-guide/architecture/components/ruler/index.md @@ -12,7 +12,7 @@ Each tenant has a set of recording and alerting rules and can group those rules [//]: # "Diagram source of ruler interactions https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_0_938" -![Architecture of Grafana Mimir's ruler component](../../../images/ruler.svg) +![Architecture of Grafana Mimir's ruler component](ruler.svg) ## Recording rules @@ -20,8 +20,8 @@ The ruler evaluates the expressions in the recording rules at regular intervals The ruler has a built-in querier that evaluates the PromQL expressions and a built-in distributor, so that it can write directly to the ingesters. 
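For reference, a recording rule group uses the standard Prometheus rule-file format; a minimal sketch, with an illustrative metric name and interval:

```yaml
# Recording rule group: the built-in querier evaluates the expression at each interval,
# and the built-in distributor writes the resulting series back to the ingesters.
groups:
  - name: example-recording-rules
    interval: 1m
    rules:
      - record: job:http_requests_total:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```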
Configuration of the built-in querier and distributor uses their respective configuration parameters: -- [Querier]({{< relref "../../configuring/reference-configuration-parameters/index.md#querier" >}}) -- [Distributor]({{< relref "../../configuring/reference-configuration-parameters/index.md#distributor" >}}) +- [Querier]({{< relref "../../../configuring/reference-configuration-parameters/index.md#querier" >}}) +- [Distributor]({{< relref "../../../configuring/reference-configuration-parameters/index.md#distributor" >}}) ## Alerting rules @@ -31,20 +31,20 @@ After the alert has been active for the entire `for` duration, it enters the **F The ruler then notifies Alertmanagers of any **FIRING** (`firing`) alerts. Configure the addresses of Alertmanagers with the `-ruler.alertmanager-url` flag, which supports the DNS service discovery format. -For more information about DNS service discovery, refer to [Supported discovery modes]({{< relref "../../configuring/about-dns-service-discovery.md" >}}). +For more information about DNS service discovery, refer to [Supported discovery modes]({{< relref "../../../configuring/about-dns-service-discovery.md" >}}). ## Sharding The ruler supports multi-tenancy and horizontal scalability. To achieve horizontal scalability, the ruler shards the execution of rules by rule groups. -Ruler replicas form their own [hash ring]({{< relref "../hash-ring/index.md" >}}) stored in the [KV store]({{< relref "../key-value-store.md" >}}) to divide the work of the executing rules. +Ruler replicas form their own [hash ring]({{< relref "../../hash-ring/index.md" >}}) stored in the [KV store]({{< relref "../../key-value-store.md" >}}) to divide the work of the executing rules. -To configure the rulers' hash ring, refer to [configuring hash rings]({{< relref "../../configuring/configuring-hash-rings.md" >}}). +To configure the rulers' hash ring, refer to [configuring hash rings]({{< relref "../../../configuring/configuring-hash-rings.md" >}}). ## HTTP configuration API The ruler HTTP configuration API enables tenants to create, update, and delete rule groups. -For a complete list of endpoints and example requests, refer to [ruler]({{< relref "../../reference-http-api/_index.md#ruler" >}}). +For a complete list of endpoints and example requests, refer to [ruler]({{< relref "../../../reference-http-api/_index.md#ruler" >}}). ## State @@ -55,13 +55,13 @@ The ruler supports the following backends: - [Google Cloud Storage](https://cloud.google.com/storage/): `-ruler-storage.backend=gcs` - [Microsoft Azure Storage](https://azure.microsoft.com/en-us/services/storage/): `-ruler-storage.backend=azure` - [OpenStack Swift](https://wiki.openstack.org/wiki/Swift): `-ruler-storage.backend=swift` -- [Local storage]({{< relref "#local-storage" >}}): `-ruler-storage.backend=local` +- [Local storage]({{< relref "../#local-storage" >}}): `-ruler-storage.backend=local` ### Local storage The `local` storage backend reads [Prometheus recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) from the local filesystem. -> **Note:** Local storage is a read-only backend that does not support the creation and deletion of rules through the [Configuration API]({{< relref "#http-configuration-api" >}}). +> **Note:** Local storage is a read-only backend that does not support the creation and deletion of rules through the [Configuration API]({{< relref "../#http-configuration-api" >}}). When all rulers have the same rule files, local storage supports ruler sharding. 
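A rule file read by the `local` backend is a plain Prometheus rule file. The sketch below shows an alerting rule whose `for` duration drives the transition to the **FIRING** state described above; the alert name, expression, and threshold are illustrative:

```yaml
# Example rule file for the read-only local backend.
groups:
  - name: example-alerting-rules
    rules:
      - alert: HighRequestErrorRate
        expr: |
          sum by (job) (rate(http_requests_total{status=~"5.."}[5m]))
            /
          sum by (job) (rate(http_requests_total[5m])) > 0.05
        for: 15m # the alert must stay active for 15 minutes before it fires
        labels:
          severity: warning
        annotations:
          summary: More than 5% of requests are failing
```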
To facilitate sharding in Kubernetes, mount a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) into every ruler pod. diff --git a/docs/sources/operators-guide/images/ruler.svg b/docs/sources/operators-guide/architecture/components/ruler/ruler.svg similarity index 100% rename from docs/sources/operators-guide/images/ruler.svg rename to docs/sources/operators-guide/architecture/components/ruler/ruler.svg diff --git a/docs/sources/operators-guide/architecture/components/store-gateway.md b/docs/sources/operators-guide/architecture/components/store-gateway.md index 588aab31617..44b05909975 100644 --- a/docs/sources/operators-guide/architecture/components/store-gateway.md +++ b/docs/sources/operators-guide/architecture/components/store-gateway.md @@ -8,7 +8,7 @@ weight: 70 # Grafana Mimir store-gateway The store-gateway component, which is stateful, queries blocks from [long-term storage]({{< relref "./_index.md#long-term-storage" >}}). -On the read path, the [querier]({{< relref "./querier.md" >}}) and the [ruler]({{< relref "./ruler.md">}}) use the store-gateway when handling the query, whether the query comes from a user or from when a rule is being evaluated. +On the read path, the [querier]({{< relref "./querier.md" >}}) and the [ruler]({{< relref "./ruler/index.md">}}) use the store-gateway when handling the query, whether the query comes from a user or from when a rule is being evaluated. To find the right blocks to look up at query time, the store-gateway requires an almost up-to-date view of the bucket in long-term storage. The store-gateway keeps the bucket view updated using one of the following options: diff --git a/docs/sources/operators-guide/architecture/deployment-modes.md b/docs/sources/operators-guide/architecture/deployment-modes/index.md similarity index 88% rename from docs/sources/operators-guide/architecture/deployment-modes.md rename to docs/sources/operators-guide/architecture/deployment-modes/index.md index 90e1966ba99..6ecf07df345 100644 --- a/docs/sources/operators-guide/architecture/deployment-modes.md +++ b/docs/sources/operators-guide/architecture/deployment-modes/index.md @@ -24,22 +24,22 @@ The monolithic mode runs all required components in a single process and is the [//]: # "Diagram source at https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11694eaa76e_0_0" -![Mimir's monolithic mode](../../images/monolithic-mode.svg) +![Mimir's monolithic mode](monolithic-mode.svg) Monolithic mode can be horizontally scaled out by deploying multiple Grafana Mimir binaries with `-target=all`. This approach provides high-availability and increased scale without the configuration complexity of the full [microservices deployment](#microservices-mode). [//]: # "Diagram source at https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_1_20" -![Mimir's horizontally scaled monolithic mode](../../images/scaled-monolithic-mode.svg) +![Mimir's horizontally scaled monolithic mode](scaled-monolithic-mode.svg) ## Microservices mode In microservices mode, components are deployed in distinct processes. Scaling is per component, which allows for greater flexibility in scaling and more granular failure domains. Microservices mode is the preferred method for a production deployment, but it is also the most complex. 
-In microservices mode, each Grafana Mimir process is invoked with its `-target` parameter set to a specific Grafana Mimir component (for example, `-target=ingester` or `-target=distributor`). To get a working Grafana Mimir instance, you must deploy every required component. For more information about each of the Grafana Mimir components, refer to [Architecture]({{}}). +In microservices mode, each Grafana Mimir process is invoked with its `-target` parameter set to a specific Grafana Mimir component (for example, `-target=ingester` or `-target=distributor`). To get a working Grafana Mimir instance, you must deploy every required component. For more information about each of the Grafana Mimir components, refer to [Architecture]({{}}). If you are interested in deploying Grafana Mimir in microservices mode, we recommend that you use [Kubernetes](https://kubernetes.io/). [//]: # "Diagram source at https://docs.google.com/presentation/d/1LemaTVqa4Lf_tpql060vVoDGXrthp-Pie_SQL7qwHjc/edit#slide=id.g11658e7e4c6_1_53" -![Mimir's microservices mode](../../images/microservices-mode.svg) +![Mimir's microservices mode](microservices-mode.svg) diff --git a/docs/sources/operators-guide/images/microservices-mode.svg b/docs/sources/operators-guide/architecture/deployment-modes/microservices-mode.svg similarity index 100% rename from docs/sources/operators-guide/images/microservices-mode.svg rename to docs/sources/operators-guide/architecture/deployment-modes/microservices-mode.svg diff --git a/docs/sources/operators-guide/images/monolithic-mode.svg b/docs/sources/operators-guide/architecture/deployment-modes/monolithic-mode.svg similarity index 100% rename from docs/sources/operators-guide/images/monolithic-mode.svg rename to docs/sources/operators-guide/architecture/deployment-modes/monolithic-mode.svg diff --git a/docs/sources/operators-guide/images/scaled-monolithic-mode.svg b/docs/sources/operators-guide/architecture/deployment-modes/scaled-monolithic-mode.svg similarity index 100% rename from docs/sources/operators-guide/images/scaled-monolithic-mode.svg rename to docs/sources/operators-guide/architecture/deployment-modes/scaled-monolithic-mode.svg diff --git a/docs/sources/operators-guide/architecture/hash-ring/index.md b/docs/sources/operators-guide/architecture/hash-ring/index.md index 10e9bc75e5e..5aa3600bce7 100644 --- a/docs/sources/operators-guide/architecture/hash-ring/index.md +++ b/docs/sources/operators-guide/architecture/hash-ring/index.md @@ -76,7 +76,7 @@ Each of the following components builds an independent hash ring: - [Distributors]({{< relref "../components/distributor.md" >}}) enforce rate limits. - [Compactors]({{< relref "../components/compactor/index.md" >}}) shard compaction workload. - [Store-gateways]({{< relref "../components/store-gateway.md" >}}) shard blocks to query from long-term storage. -- [(Optional) Rulers]({{< relref "../components/ruler.md" >}}) shard rule groups to evaluate. +- [(Optional) Rulers]({{< relref "../components/ruler/index.md" >}}) shard rule groups to evaluate. - [(Optional) Alertmanagers]({{< relref "../components/alertmanager.md" >}}) shard tenants. 
## How the hash ring is shared between Grafana Mimir instances diff --git a/docs/sources/operators-guide/deploying-grafana-mimir/_index.md b/docs/sources/operators-guide/deploying-grafana-mimir/_index.md index 44db4ec1a64..93bc327f038 100644 --- a/docs/sources/operators-guide/deploying-grafana-mimir/_index.md +++ b/docs/sources/operators-guide/deploying-grafana-mimir/_index.md @@ -14,10 +14,10 @@ You can use Helm or Tanka to deploy Grafana Mimir on Kubernetes. ## Helm -A [mimir-distributed](https://github.com/grafana/helm-charts/tree/main/charts/mimir-distributed) Helm chart that deploys Grafana Mimir in [microservices mode]({{< relref "../architecture/deployment-modes.md#microservices-mode" >}}) is available in the grafana/helm-charts repo. +A [mimir-distributed](https://github.com/grafana/helm-charts/tree/main/charts/mimir-distributed) Helm chart that deploys Grafana Mimir in [microservices mode]({{< relref "../architecture/deployment-modes/index.md#microservices-mode" >}}) is available in the grafana/helm-charts repo. ## Tanka -Grafana Labs also publishes [jsonnet](https://jsonnet.org/) files that you can use to deploy Grafana Mimir in [microservices mode]({{< relref "../architecture/deployment-modes.md#microservices-mode" >}}). To locate the Jsonnet files and a README file, refer to [Jsonnet for Mimir on Kubernetes](https://github.com/grafana/mimir/tree/main/operations/mimir). +Grafana Labs also publishes [jsonnet](https://jsonnet.org/) files that you can use to deploy Grafana Mimir in [microservices mode]({{< relref "../architecture/deployment-modes/index.md#microservices-mode" >}}). To locate the Jsonnet files and a README file, refer to [Jsonnet for Mimir on Kubernetes](https://github.com/grafana/mimir/tree/main/operations/mimir). The README explains how to use [tanka](https://tanka.dev/) and [jsonnet-bundler](https://github.com/jsonnet-bundler/jsonnet-bundler) to generate Kubernetes YAML manifests from the jsonnet files. Alternatively, if you are familiar with tanka, you can use it directly to deploy Grafana Mimir. diff --git a/docs/sources/operators-guide/getting-started/_index.md b/docs/sources/operators-guide/getting-started/_index.md index 8c203f2c2f3..b9d8a4bc1d9 100644 --- a/docs/sources/operators-guide/getting-started/_index.md +++ b/docs/sources/operators-guide/getting-started/_index.md @@ -7,7 +7,7 @@ weight: 10 # Getting started with Grafana Mimir -These instructions focus on deploying Grafana Mimir as a [monolith]({{< relref "../architecture/deployment-modes.md#monolithic-mode" >}}), which is designed for users getting started with the project. For more information about the different ways to deploy Grafana Mimir, refer to [Deployment Modes]({{< relref "../architecture/deployment-modes.md" >}}). +These instructions focus on deploying Grafana Mimir as a [monolith]({{< relref "../architecture/deployment-modes/index.md#monolithic-mode" >}}), which is designed for users getting started with the project. For more information about the different ways to deploy Grafana Mimir, refer to [Deployment Modes]({{< relref "../architecture/deployment-modes/index.md" >}}). ## Before you begin diff --git a/docs/sources/operators-guide/reference-glossary.md b/docs/sources/operators-guide/reference-glossary.md index 5d98e6c5eb7..e53998c7b47 100644 --- a/docs/sources/operators-guide/reference-glossary.md +++ b/docs/sources/operators-guide/reference-glossary.md @@ -11,7 +11,7 @@ weight: 100 Blocks storage is the Mimir storage engine based on the Prometheus TSDB. 
Grafana Mimir stores blocks in object stores such as AWS S3, Google Cloud Storage (GCS), Azure blob storage, or OpenStack Object Storage (Swift). -For a complete list of supported backends, refer to [About the architecture]({{< relref "architecture/about-grafana-mimir-architecture.md" >}}) +For a complete list of supported backends, refer to [About the architecture]({{< relref "architecture/about-grafana-mimir-architecture/index.md" >}}) ## Chunk @@ -37,7 +37,7 @@ For component specific documentation, refer to one of the following topics: - [Query-scheduler]({{< relref "architecture/components/query-scheduler/index.md" >}}) - [Store-gateway]({{< relref "architecture/components/store-gateway.md" >}}) - [Optional: Alertmanager]({{< relref "architecture/components/alertmanager.md" >}}) -- [Optional: Ruler]({{< relref "architecture/components/ruler.md" >}}) +- [Optional: Ruler]({{< relref "architecture/components/ruler/index.md" >}}) ## Flushing diff --git a/docs/sources/operators-guide/running-production-environment/planning-capacity.md b/docs/sources/operators-guide/running-production-environment/planning-capacity.md index c5ffc9c0b9e..a459e19fc59 100644 --- a/docs/sources/operators-guide/running-production-environment/planning-capacity.md +++ b/docs/sources/operators-guide/running-production-environment/planning-capacity.md @@ -125,7 +125,7 @@ Estimated required CPU, memory, and disk space: ### (Optional) Ruler -The [ruler]({{< relref "../architecture/components/ruler.md" >}}) component resources utilization is determined by the number of rules evaluated per second. +The [ruler]({{< relref "../architecture/components/ruler/index.md" >}}) component resources utilization is determined by the number of rules evaluated per second. The rules evaluation is computationally equal to queries execution, so the querier resources recommendations apply to ruler too. ### Compactor diff --git a/docs/sources/operators-guide/images/avoid-querying-non-compacted-blocks.png b/docs/sources/operators-guide/running-production-environment/production-tips/avoid-querying-non-compacted-blocks.png similarity index 100% rename from docs/sources/operators-guide/images/avoid-querying-non-compacted-blocks.png rename to docs/sources/operators-guide/running-production-environment/production-tips/avoid-querying-non-compacted-blocks.png diff --git a/docs/sources/operators-guide/running-production-environment/production-tips.md b/docs/sources/operators-guide/running-production-environment/production-tips/index.md similarity index 86% rename from docs/sources/operators-guide/running-production-environment/production-tips.md rename to docs/sources/operators-guide/running-production-environment/production-tips/index.md index 75d0605438e..77b99dff363 100644 --- a/docs/sources/operators-guide/running-production-environment/production-tips.md +++ b/docs/sources/operators-guide/running-production-environment/production-tips/index.md @@ -20,7 +20,7 @@ The total number of file descriptors, used to load TSDB files, linearly increase We recommend fine-tuning the following settings to avoid reaching the maximum number of open file descriptors: 1. Configure the system's `file-max` ulimit to at least `65536`. Increase the limit to `1048576` when running a Grafana Mimir cluster with more than a thousand tenants. -1. Enable ingesters [shuffle sharding](../guides/shuffle-sharding.md) to reduce the number of tenants per ingester. +1. 
Enable ingesters [shuffle sharding]({{< relref "../../configuring/configuring-shuffle-sharding.md" >}}) to reduce the number of tenants per ingester. ### Ingester disk space @@ -28,7 +28,7 @@ The ingester writes received samples to a write-ahead log (WAL) and by default, Both the WAL and blocks are temporarily stored on the local disk. The required disk space depends on the number of time series stored in the ingester and the configured `-blocks-storage.tsdb.retention-period`. -For more information about estimating the required ingester disk space, refer to [Planning capacity]({{< relref "planning-capacity.md#ingester" >}}). +For more information about estimating the required ingester disk space, refer to [Planning capacity]({{< relref "../planning-capacity.md#ingester" >}}). ## Querier @@ -37,7 +37,7 @@ For more information about estimating the required ingester disk space, refer to The querier supports caching to reduce the number API calls to the long-term storage. We recommend enabling caching in the querier. -For more information about configuring the cache, refer to [querier]({{< relref "../architecture/components/querier.md" >}}). +For more information about configuring the cache, refer to [querier]({{< relref "../../architecture/components/querier.md" >}}). ### Avoid querying non-compacted blocks @@ -48,7 +48,7 @@ When running Grafana Mimir at scale, querying non-compacted blocks might be inef Configure Grafana Mimir to ensure only compacted blocks are queried: -1. Configure compactor's `-compactor.split-and-merge-shards` and `-compactor.split-groups` for every tenant with more than 20 million active series. For more information about configuring the compactor's split and merge shards, refer to [compactor]({{< relref "../architecture/components/compactor/index.md" >}}). +1. Configure compactor's `-compactor.split-and-merge-shards` and `-compactor.split-groups` for every tenant with more than 20 million active series. For more information about configuring the compactor's split and merge shards, refer to [compactor]({{< relref "../../architecture/components/compactor/index.md" >}}). 1. Configure querier's `-querier.query-store-after` equal to `-querier.query-ingesters-within` minus five minutes. The five-minute delta is recommended to ensure the time range on the boundary is queried both from ingesters and queriers. #### How to estimate `-querier.query-store-after` @@ -61,9 +61,9 @@ The following diagram shows all of the timings involved in the estimation. This - The compactor takes up to three hours to compact two-hour blocks shipped from all ingesters - Querier and store-gateways take up to 15 minutes to discover and load a new compacted block -Based on these assumptions, in the worst-case scenario, it takes up to six hours and 45 minutes from when a sample is ingested until that sample has been appended to a block flushed to the storage and the block is [vertically compacted](./compactor/index.md) with all other overlapping two-hour blocks shipped from ingesters. +Based on these assumptions, in the worst-case scenario, it takes up to six hours and 45 minutes from when a sample is ingested until that sample has been appended to a block flushed to the storage and the block is [vertically compacted]({{< relref "../../architecture/components/compactor/index.md" >}}) with all other overlapping two-hour blocks shipped from ingesters. 
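Putting the two flags together under the rule above: if queriers are configured to query ingesters for the most recent 13 hours (an illustrative value), the store boundary is set five minutes earlier. Both flag names come from this section; the durations are assumptions to derive from your own compaction and block-upload timings:

```yaml
# Excerpt of the querier container spec.
args:
  - -target=querier
  - -querier.query-ingesters-within=13h # illustrative; must cover the period when samples exist only in ingesters
  - -querier.query-store-after=12h55m # 13h minus 5 minutes, comfortably above the ~6h45m worst case estimated above
```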
-![Avoid querying non compacted blocks](../../images/avoid-querying-non-compacted-blocks.png) +![Avoid querying non compacted blocks](avoid-querying-non-compacted-blocks.png) [//]: # "Diagram source at https://docs.google.com/presentation/d/1bHp8_zcoWCYoNU2AhO2lSagQyuIrghkCncViSqn14cU/edit" @@ -74,7 +74,7 @@ Based on these assumptions, in the worst-case scenario, it takes up to six hours The store-gateway supports caching that reduces the number of API calls to the long-term storage and improves query performance. We recommend enabling caching in the store-gateway. -For more information about configuring the cache, refer to [store-gateway]({{< relref "../architecture/components/store-gateway.md" >}}). +For more information about configuring the cache, refer to [store-gateway]({{< relref "../../architecture/components/store-gateway.md" >}}). ### Ensure a high number of maximum open file descriptors @@ -89,7 +89,7 @@ We recommend configuring the system's `file-max` ulimit at least to `65536` to a ### Ensure the compactor has enough disk space The compactor requires a lot of disk space to download source blocks from the long-term storage and temporarily store the compacted block before uploading it to the storage. -For more information about required disk space, refer to [Compactor disk utilization](../architecture/components/compactor/index.md#compactor-disk-utilization). +For more information about required disk space, refer to [Compactor disk utilization]({{< relref "../../architecture/components/compactor/index.md#compactor-disk-utilization" >}}). ## Caching @@ -105,4 +105,4 @@ Running a dedicated Memcached cluster for each cache type is not required, but r ## Security We recommend securing the Grafana Mimir cluster. -For more information about securing a Mimir cluster, refer to [Securing Grafana Mimir]({{< relref "../securing/_index.md" >}}). +For more information about securing a Mimir cluster, refer to [Securing Grafana Mimir]({{< relref "../../securing/_index.md" >}}).
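As an illustration of the caching recommendation above, the query results cache could be pointed at its own dedicated Memcached cluster roughly as follows. The flag names and the `dns+` service-discovery prefix are assumptions to verify against the reference configuration parameters, and the Memcached Service address is illustrative:

```yaml
# Excerpt of the query-frontend container spec: results cache backed by a dedicated Memcached cluster.
args:
  - -target=query-frontend
  - -query-frontend.results-cache.backend=memcached # assumed flag name
  - -query-frontend.results-cache.memcached.addresses=dns+results-cache.mimir.svc.cluster.local:11211 # assumed flag name and DNS prefix
```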