diff --git a/docs/sources/operators-guide/configuring/about-dns-service-discovery.md b/docs/sources/operators-guide/configuring/about-dns-service-discovery.md index 5b3a2ac0aa8..64e0bbf5ca4 100644 --- a/docs/sources/operators-guide/configuring/about-dns-service-discovery.md +++ b/docs/sources/operators-guide/configuring/about-dns-service-discovery.md @@ -7,7 +7,7 @@ weight: 20 # About Grafana Mimir DNS service discovery -Some clients in Grafana Mimir support service discovery via DNS to find the addresses of backend servers to connect to. These clients support service discovery via DNS: +Some clients in Grafana Mimir support service discovery via DNS to locate the addresses of backend servers to connect to. The following clients support service discovery via DNS: - [Block storage's memcached cache]({{< relref "reference-configuration-parameters/#blocks_storage" >}}) - [All caching memcached servers]({{< relref "reference-configuration-parameters/#memcached" >}}) @@ -15,7 +15,8 @@ Some clients in Grafana Mimir support service discovery via DNS to find the addr ## Supported discovery modes -The DNS service discovery supports different discovery modes. A discovery mode is selected adding a specific prefix to the address. Supported prefixes are: +DNS service discovery supports different discovery modes. +You select a discovery mode by adding one of the following supported prefixes to the address: - **`dns+`**
The domain name after the prefix is looked up as an A/AAAA query. For example: `dns+memcached.local:11211`. @@ -24,4 +25,4 @@ The DNS service discovery supports different discovery modes. A discovery mode i - **`dnssrvnoa+`**
The domain name after the prefix is looked up as a SRV query, with no A/AAAA lookup made after that. For example: `dnssrvnoa+_memcached._tcp.memcached.namespace.svc.cluster.local`. -If no prefix is provided, the provided IP or hostname will be used without pre-resolving it. +If no prefix is provided, the provided IP or hostname is used without pre-resolving it. diff --git a/docs/sources/operators-guide/configuring/about-ip-address-logging.md b/docs/sources/operators-guide/configuring/about-ip-address-logging.md index 77acd4e2afd..32a7ba5340b 100644 --- a/docs/sources/operators-guide/configuring/about-ip-address-logging.md +++ b/docs/sources/operators-guide/configuring/about-ip-address-logging.md @@ -7,16 +7,17 @@ weight: 30 # About Grafana Mimir IP address logging of a reverse proxy -If a reverse proxy is used in front of Mimir, it may be difficult to troubleshoot errors. The following settings can be used to log the IP address passed along by the reverse proxy in headers such as `X-Forwarded-For`. +If a reverse proxy is used in front of Mimir, it might be difficult to troubleshoot errors. +You can use the following settings to log the IP address passed along by the reverse proxy in headers such as `X-Forwarded-For`. - `-server.log-source-ips-enabled` - Set this to true to add IP address logging when a `Forwarded`, `X-Real-IP` or `X-Forwarded-For` header is used. A field called `sourceIPs` will be added to error logs when data is pushed into Grafana Mimir. + Set this to `true` to add IP address logging when a `Forwarded`, `X-Real-IP`, or `X-Forwarded-For` header is used. A field called `sourceIPs` is added to error logs when data is pushed into Grafana Mimir. - `-server.log-source-ips-header` - Header field storing the source IP addresses. It is only used if `-server.log-source-ips-enabled` is true, and if `-server.log-source-ips-regex` is set. If not set, the default `Forwarded`, `X-Real-IP` or `X-Forwarded-For` headers are searched. 
+ The header field that stores the source IP addresses. It is used only if `-server.log-source-ips-enabled` is `true`, and if `-server.log-source-ips-regex` is set. If you do not set this flag, the default `Forwarded`, `X-Real-IP`, or `X-Forwarded-For` headers are searched.
- `-server.log-source-ips-regex`
- Regular expression for matching the source IPs. It should contain at least one capturing group the first of which will be returned. Only used if `-server.log-source-ips-enabled` is true and if `-server.log-source-ips-header` is set.
+ A regular expression that is used to match the source IPs. The regular expression must contain at least one capturing group, the first of which is returned. This flag is used only if `-server.log-source-ips-enabled` is `true` and if `-server.log-source-ips-header` is set.
diff --git a/docs/sources/operators-guide/configuring/about-runtime-configuration.md b/docs/sources/operators-guide/configuring/about-runtime-configuration.md
index 07388711fef..427f3a8df29 100644
--- a/docs/sources/operators-guide/configuring/about-runtime-configuration.md
+++ b/docs/sources/operators-guide/configuring/about-runtime-configuration.md
@@ -7,9 +7,11 @@ weight: 40

 # About Grafana Mimir runtime configuration

-A runtime configuration file is a file containing configuration, which is periodically reloaded while Mimir is running. It allows you to change a subset of Grafana Mimir’s configuration without having to restart the Grafana Mimir component or instance.
+A runtime configuration file contains configuration parameters and is periodically reloaded while Mimir is running.
+It allows you to change a subset of Grafana Mimir’s configuration without having to restart the Grafana Mimir component or instance.

-Runtime configuration is available for a subset of the configuration that was set at startup. A Grafana Mimir operator can observe the configuration and use runtime configuration to make immediate adjustments to Grafana Mimir. 
+Runtime configuration is available for a subset of the configuration that was set at startup.
+A Grafana Mimir operator can observe the configuration and use runtime configuration to make immediate adjustments to Grafana Mimir.

 Runtime configuration values take precedence over command-line options.

@@ -27,11 +29,11 @@ Use Grafana Mimir’s `/runtime_config` endpoint to see the current value of the

 ## Runtime configuration of per-tenant limits

-The primary use case for the runtime configuration file is that it allows you to set and adjust limits for each tenant in Grafana Mimir. Doing so lets you set limits that are appropriate for each tenant based on their ingest and query needs.
+The runtime configuration file is primarily used to set and adjust limits that are appropriate for each tenant based on their ingest and query needs.

-The values that are defined in the limits section of your YAML configuration define the default set of limits that are applied to tenants. For example, if you set the `ingestion_rate` to 25,000 in your YAML configuration file, any tenant in your cluster that is sending more than 25,000 samples per second (SPS) will be rate limited.
+The values that are defined in the limits section of your YAML configuration define the default set of limits that are applied to tenants. For example, if you set the `ingestion_rate` to `25000` in your YAML configuration file, any tenant in your cluster that sends more than 25,000 samples per second (SPS) is rate limited.

-You can use the runtime configuration file to override this behavior. For example, if you have a tenant (`tenant1`) that needs to send twice as many data points as the current limit, and you have another tenant (`tenant2`) that needs to send three times as many data points, you can modify the contents of your runtime configuration file:
+You can use the runtime configuration file to override this behavior. 
For example, if you have a tenant (`tenant1`) that needs to send twice as many data points as the current limit, and you have another tenant (`tenant2`) that needs to send three times as many data points, you can modify the contents of your runtime configuration file as follows:

 ```yaml
 overrides:
@@ -49,13 +51,13 @@ As a result, Grafana Mimir allows `tenant1` to send 50,000 SPS, and `tenant2` to

 ## Ingester instance limits

-Grafana Mimir ingesters support limits that are applied per instance, meaning that they apply to each ingester process. These limits can be used to ensure individual ingesters are not overwhelmed regardless of any per-tenant limits. These limits can be set under the `ingester.instance_limits` block in the global configuration file, with CLI flags, or under the `ingester_limits` field in the runtime configuration file.
-
-The runtime configuration file can be used to dynamically adjust ingester instance limits. While per-tenant limits are limits applied to each tenant, per-ingester-instance limits are limits applied to each ingester process.
+The runtime configuration file can be used to dynamically adjust Grafana Mimir ingester instance limits. While per-tenant limits apply to each tenant, per-ingester-instance limits apply to each ingester process.
+Ingester limits ensure individual ingesters are not overwhelmed, regardless of any per-tenant limits. These limits can be set under the `ingester.instance_limits` block in the global configuration file, with CLI flags, or under the `ingester_limits` field in the runtime configuration file.

 The runtime configuration allows you to override initial values, which is useful for advanced operators who need to dynamically change them in response to changes in ingest or query load.

-Everything under the `instance_limits` section within the [`ingester`]({{< relref "reference-configuration-parameters/#ingester" >}}) block can be overridden via runtime configuration. 
Here is an example portion of runtime configuration that changes the ingester limits:
+Everything under the `instance_limits` section within the [`ingester`]({{< relref "reference-configuration-parameters/#ingester" >}}) block can be overridden via runtime configuration.
+The following example shows a portion of the runtime configuration that changes the ingester limits:

 ```yaml
 ingester_limits:
@@ -67,11 +69,9 @@ ingester_limits:

 ## Runtime configuration of ingester streaming

-An advanced runtime configuration
-controls whether ingesters transfer encoded chunks (the default) or transfer decoded series to queriers at query time.
+An advanced runtime configuration option controls whether ingesters transfer encoded chunks (the default) or transfer decoded series to queriers at query time.

-The parameter `ingester_stream_chunks_when_using_blocks` may only be used in runtime configuration.
-A value of true transfers encoded chunks,
-and a value of false transfers decoded series.
+The parameter `ingester_stream_chunks_when_using_blocks` can only be used in runtime configuration.
+A value of `true` transfers encoded chunks, and a value of `false` transfers decoded series.

-We strongly recommend against changing the default setting. It already defaults to true, and should remain true except for rare corner cases where users have observed slowdowns in Grafana Mimir rules evaluation.
+> **Note:** We strongly recommend that you use the default setting, which is `true`, except in rare cases where users observe Grafana Mimir rules evaluation slowing down. 
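To make the two runtime-configuration sections above concrete, here is a minimal sketch of a single runtime configuration file combining them. The tenant name `tenant1` and its `ingestion_rate` value are taken from the per-tenant limits example in this doc; everything else is only illustrative:

```yaml
# Runtime configuration file, periodically reloaded while Mimir is running.

# Streaming parameter discussed above; it can only be set here, not at startup.
# true (the default) transfers encoded chunks, false transfers decoded series.
ingester_stream_chunks_when_using_blocks: true

# Per-tenant limit overrides, as in the per-tenant limits section.
overrides:
  tenant1:
    ingestion_rate: 50000
```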
diff --git a/docs/sources/operators-guide/configuring/about-tenant-ids.md b/docs/sources/operators-guide/configuring/about-tenant-ids.md
index 1b84b9824b3..d530f0cec6d 100644
--- a/docs/sources/operators-guide/configuring/about-tenant-ids.md
+++ b/docs/sources/operators-guide/configuring/about-tenant-ids.md
@@ -12,7 +12,7 @@ For information about how Grafana Mimir components use tenant IDs, refer to [Aut

 ## Restrictions

-Tenant IDs must be less-than or equal-to 150 bytes or characters in length and must comprise only supported characters:
+Tenant IDs must be less than or equal to 150 bytes or characters in length and can only include the following supported characters:

 - Alphanumeric characters
 - `0-9`
diff --git a/docs/sources/operators-guide/configuring/configuring-hash-rings.md b/docs/sources/operators-guide/configuring/configuring-hash-rings.md
index dd08c562aac..b53903ab4e2 100644
--- a/docs/sources/operators-guide/configuring/configuring-hash-rings.md
+++ b/docs/sources/operators-guide/configuring/configuring-hash-rings.md
@@ -9,8 +9,7 @@ weight: 60

 [Hash rings]({{< relref "../architecture/hash-ring.md" >}}) are a distributed consistent hashing scheme and are widely used by Grafana Mimir for sharding and replication.

-There are several Grafana Mimir components that require a hash ring.
-Each of the following components builds an independent hash ring.
+Each of the following Grafana Mimir components builds an independent hash ring.
 The CLI flags used to configure the hash ring of each component have the following prefixes:

 - Ingesters: `-ingester.ring.*`
@@ -40,10 +39,10 @@ You can configure the KV store backend setting the `.store` CLI flag (fo

 ### Memberlist (default)

-By default, Grafana Mimir uses `memberlist` KV store backend.
+By default, Grafana Mimir uses `memberlist` as the KV store backend.

-At startup, a Mimir instance connects to other Mimir replicas to join the cluster. 
-A Mimir instance discovers the other replicas to join resolving the addresses configured in `-memberlist.join`. +At startup, a Grafana Mimir instance connects to other Mimir replicas to join the cluster. +A Grafana Mimir instance discovers the other replicas to join by resolving the addresses configured in `-memberlist.join`. The `-memberlist.join` CLI flag must resolve to other replicas in the cluster and can be specified multiple times. The `-memberlist.join` can be set to: @@ -56,9 +55,9 @@ The default port is `7946`. > **Note**: At a minimum, configure one or more addresses that resolve to a consistent subset of replicas (for example, all the ingesters). -> **Note**: If you're running Grafana Mimir in Kubernetes, we recommend defining a [headless Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) which resolves to the IP addresses of all Grafana Mimir pods. Then you set `-memberlist.join` to `dnssrv+..svc.cluster.local:`. +> **Note**: If you're running Grafana Mimir in Kubernetes, define a [headless Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) which resolves to the IP addresses of all Grafana Mimir pods. Then you set `-memberlist.join` to `dnssrv+..svc.cluster.local:`. -> **Note**: the `memberlist` backend is configured globally and can't be customized on a per-component basis. In these regards, the `memberlist` backend differs from others supported backends, like Consul or etcd. +> **Note**: The `memberlist` backend is configured globally and can't be customized on a per-component basis. Because `memberlist` is configured globally, the `memberlist` backend differs from other supported backends, such as Consul or etcd. Grafana Mimir supports TLS for memberlist connections between its components. For more information about TLS configuration, refer to [secure communications with TLS]({{< relref "../securing/securing-communications-with-tls.md" >}}). 
@@ -71,13 +70,13 @@ By default, Grafana Mimir memberlist protocol listens on address `0.0.0.0` and p If you run multiple Mimir processes on the same node or the port `7946` is not available, you can change the bind and advertise port by setting the following parameters: - `-memberlist.bind-addr`: IP address to listen on the local machine. -- `-memberlist.bind-port`: port to listen on the local machine. +- `-memberlist.bind-port`: Port to listen on the local machine. - `-memberlist.advertise-addr`: IP address to advertise to other Mimir replicas. The other replicas will connect to this IP to talk to the instance. -- `-memberlist.advertise-port`: port to advertise to other Mimir replicas. The other replicas will connect to this port to talk to the instance. +- `-memberlist.advertise-port`: Port to advertise to other Mimir replicas. The other replicas will connect to this port to talk to the instance. ### Fine tuning memberlist changes propagation latency -The `cortex_ring_oldest_member_timestamp` metric can be used to measure the hash ring changes propagation. +The `cortex_ring_oldest_member_timestamp` metric can be used to measure the propagation of hash ring changes. This metric tracks the oldest heartbeat timestamp across all instances in the ring. You can execute the following query to measure the age of the oldest heartbeat timestamp in the ring: @@ -85,8 +84,8 @@ You can execute the following query to measure the age of the oldest heartbeat t max(time() - cortex_ring_oldest_member_timestamp{state="ACTIVE"}) ``` -The measured age shouldn't be higher than the configured `.heartbeat-period` plus a reasonable delta (15 seconds). -If you experience an higher changes propagation latency, you can fine tune the following settings: +The measured age shouldn't be higher than the configured `.heartbeat-period` plus a reasonable delta (for example, 15 seconds). 
+If you experience a higher changes propagation latency, you can adjust the following settings: - Decrease `-memberlist.gossip-interval` - Increase `-memberlist.gossip-nodes` @@ -106,7 +105,7 @@ To see all supported configuration parameters, refer [consul]({{< relref "../con To use [etcd](https://etcd.io) as a backend KV store, set the following parameters: -- `.etcd.endpoints`: etcd hostname and port separated by colon. For example, `etcd:2379`. +- `.etcd.endpoints`: etcd hostname and port separated by a colon. For example, `etcd:2379`. - `.etcd.username`: Username used to authenticate to etcd. If etcd authentication is disabled, you can leave the username empty. - `.etcd.password`: Password used to authenticate to etcd. If etcd authentication is disabled, you can leave the password empty. @@ -126,17 +125,17 @@ The primary and secondary backends can be swapped in real-time, which enables yo You can use the following parameters to configure the multi KV store settings: -- `.multi.primary`: the type of primary backend store. -- `.multi.secondary`: the type of the secondary backend store. -- `.multi.mirror-enabled`: whether mirroring of writes to the secondary backend store is enabled. -- `.multi.mirror-timeout`: the maximum time allowed to mirror a change to the secondary backend store. +- `.multi.primary`: The type of primary backend store. +- `.multi.secondary`: The type of the secondary backend store. +- `.multi.mirror-enabled`: Whether mirroring of writes to the secondary backend store is enabled. +- `.multi.mirror-timeout`: The maximum time allowed to mirror a change to the secondary backend store. -> **Note**: Grafana Mimir does not log an error if unable to mirror writes to the secondary backend store. The total number of errors is only tracked through the metric `cortex_multikv_mirror_write_errors_total`. +> **Note**: Grafana Mimir does not log an error if it is unable to mirror writes to the secondary backend store. 
The total number of errors is only tracked through the metric `cortex_multikv_mirror_write_errors_total`.

 The multi KV primary backend and mirroring can also be configured in the [runtime configuration file]({{< relref "../configuring/about-runtime-configuration.md" >}}).

-Changes to a multi KV Store in the runtime configuration apply to ALL components using a multi KV store.
+Changes to a multi KV store in the runtime configuration apply to _all_ components using a multi KV store.

-Example runtime configuration file for the multi KV store:
+The following example shows a runtime configuration file for the multi KV store:

 ```yaml
 multi_kv_config:
@@ -145,17 +144,17 @@ multi_kv_config:
     mirror_enabled: true
 ```

-> **Note**: the runtime configuration settings take precedence over CLI flags.
+> **Note**: The runtime configuration settings take precedence over CLI flags.

-#### A practical example
+#### Ingester migration example

-For example, you can migrate ingesters from Consul to etcd using the following procedure:
+The following steps show how to migrate ingesters from Consul to etcd:

-1. Configure `-ingester.ring.store=multi`, `-ingester.ring.multi.primary=consul`, `-ingester.ring.multi.secondary=etcd` and `-ingester.ring.multi.mirror-enabled=true`. Configure both Consul settings `-ingester.ring.consul.*` and etcd settings `-ingester.ring.etcd.*`.
-1. Apply changes to your Grafana Mimir cluster. After changes have rolled out, Mimir will use Consul as primary KV store, and all writes will be mirrored to etcd too.
+1. Configure `-ingester.ring.store=multi`, `-ingester.ring.multi.primary=consul`, `-ingester.ring.multi.secondary=etcd`, and `-ingester.ring.multi.mirror-enabled=true`. Configure both Consul settings `-ingester.ring.consul.*` and etcd settings `-ingester.ring.etcd.*`.
+1. Apply changes to your Grafana Mimir cluster. After changes have rolled out, Grafana Mimir uses Consul as the primary KV store, and all writes are also mirrored to etcd.
 1. 
Configure `primary: etcd` in the `multi_kv_config` block of the [runtime configuration file]({{< relref "../configuring/about-runtime-configuration.md" >}}). Changes in the runtime configuration file are reloaded live, without the need to restart the process. 1. Wait until all Mimir instances have reloaded the updated configuration. 1. Configure `mirror_enabled: false` in the `multi_kv_config` block of the [runtime configuration file]({{< relref "../configuring/about-runtime-configuration.md" >}}). 1. Wait until all Mimir instances have reloaded the updated configuration. -1. Configure `-ingester.ring.store=etcd` and remove both the multi and Consul configuration because they will not be required anymore. -1. Apply changes to your Grafana Mimir cluster. After changes have rolled out, Mimir will only use etcd. +1. Configure `-ingester.ring.store=etcd` and remove both the multi and Consul configuration because they are no longer required. +1. Apply changes to your Grafana Mimir cluster. After changes have rolled out, Grafana Mimir only uses etcd. diff --git a/docs/sources/operators-guide/configuring/configuring-high-availability-deduplication.md b/docs/sources/operators-guide/configuring/configuring-high-availability-deduplication.md index f61b7152477..b887ad8edda 100644 --- a/docs/sources/operators-guide/configuring/configuring-high-availability-deduplication.md +++ b/docs/sources/operators-guide/configuring/configuring-high-availability-deduplication.md @@ -23,7 +23,7 @@ timeout ensures that too much data is not dropped before failover to the other r > **Note:** In a scenario where the default scrape period is 15 seconds, and the timeouts in Grafana Mimir are set to the default values, > when a leader-election failover occurs, you'll likely only lose a single scrape of data. For any query using the `rate()` function, make the rate time interval > at least four times that of the scrape period to account for any of these failover scenarios. 
-> For example with the default scrape period of 15 seconds, use a rate time-interval at least 1-minute long.
+> For example, with the default scrape period of 15 seconds, use a rate time interval of at least 1 minute.

 ## Distributor high-availability (HA) tracker

@@ -32,20 +32,22 @@ The [distributor]({{< relref "../architecture/components/distributor.md" >}}) in

 The HA tracker deduplicates incoming samples based on a cluster and replica label expected on each incoming series.
 The cluster label uniquely identifies the cluster of redundant Prometheus servers for a given tenant.
 The replica label uniquely identifies the replica within the Prometheus cluster.

-Incoming samples are considered duplicated (and thus dropped) if received from any replica which is not the currently elected as leader within a cluster.
+Incoming samples are considered duplicated (and thus dropped) if they are received from any replica that is not the currently elected leader within a cluster.

-In the event the HA tracker is enabled but incoming samples contain only one or none of the cluster and replica labels, these samples will be accepted by default and never deduplicated.
+If the HA tracker is enabled but incoming samples contain only one or none of the cluster and replica labels, these samples are accepted by default and never deduplicated.

-> Note: for performance reasons, the HA tracker only checks the cluster and replica label of the first series in the request to decide whether all series in the request should be deduplicated. This assumes that all series inside the request have the same cluster and replica labels, which is typically true when Prometheus is configured with external labels. We recommend you to ensure this requirement is honored if you're having a non standard Prometheus setup (eg. you're using Prometheus federation or have a metrics proxy in between). 
+> Note: For performance reasons, the HA tracker only checks the cluster and replica label of the first series in the request to determine whether all series in the request should be deduplicated. This assumes that all series inside the request have the same cluster and replica labels, which is typically true when Prometheus is configured with external labels. Ensure this requirement is honored if you have a non-standard Prometheus setup (for example, you're using Prometheus federation or have a metrics proxy in between).

 ## Configuration

-This section includes information about how to configure Prometheus and how to configure Grafana Mimir.
+This section includes information about how to configure Prometheus and Grafana Mimir.

 ### How to configure Prometheus

-To configure Prometheus, set two identifiers for each Prometheus server: one for the cluster, for example, `team-1` or `team-2`, and one to identify the replica in the cluster, for example `a` or `b`.
-It’s easiest to set [external labels](https://prometheus.io/docs/prometheus/latest/configuration/configuration/). The default labels are `cluster` and `__replica__`, for example:
+To configure Prometheus, set two identifiers for each Prometheus server: one for the cluster (for example, `team-1` or `team-2`), and one to identify the replica in the cluster (for example, `a` or `b`).
+The easiest approach is to set [external labels](https://prometheus.io/docs/prometheus/latest/configuration/configuration/). The default labels are `cluster` and `__replica__`.
+
+The following example shows how to set identifiers in Prometheus:

 ```
 global:
@@ -66,7 +68,7 @@ global:

 > **Note:** The preceding labels are external labels and have nothing to do with `remote_write` configuration.

 These two label names are configurable on a per-tenant basis within Grafana Mimir. 
For example, if the label name of one cluster is used by -some workloads, set the label name of another cluster to be something else that uniquely identifies the second cluster. +some workloads, set the label name of another cluster to something else that uniquely identifies the second cluster. Set the replica label so that the value for each Prometheus cluster is unique in that cluster. @@ -84,7 +86,7 @@ The minimal configuration required is as follows: To enable the HA tracker feature, set the `-distributor.ha-tracker.enable=true` CLI flag (or its YAML configuration option) in the distributor. -Next, decide whether you want to enable it for all tenants or just a subset of them. +Next, decide whether you want to enable it for all tenants or just a subset of tenants. To enable it for all tenants, set `-distributor.ha-tracker.enable-for-all-users=true`. Alternatively, you can enable the HA tracker only on a per-tenant basis, keeping the default `-distributor.ha-tracker.enable-for-all-users=false` and overriding it on a per-tenant basis setting `accept_ha_samples` in the overrides section of the runtime configuration. @@ -109,8 +111,8 @@ You can configure the name of these labels either globally or on a per-tenant ba Configure the default cluster and replica label names using the following CLI flags (or their respective YAML configuration options): -- `-distributor.ha-tracker.cluster`: name of the label whose value uniquely identifies a Prometheus HA cluster (defaults to `cluster`). -- `-distributor.ha-tracker.replica`: name of the label whose value uniquely identifies a Prometheus replica within the HA cluster (defaults to `__replica__`). +- `-distributor.ha-tracker.cluster`: Name of the label whose value uniquely identifies a Prometheus HA cluster (defaults to `cluster`). +- `-distributor.ha-tracker.replica`: Name of the label whose value uniquely identifies a Prometheus replica within the HA cluster (defaults to `__replica__`). 
> **Note:** The HA label names can be overridden on a per-tenant basis by setting `ha_cluster_label` and `ha_replica_label` in the overrides section of the runtime configuration. diff --git a/docs/sources/operators-guide/configuring/configuring-the-query-frontend-work-with-prometheus.md b/docs/sources/operators-guide/configuring/configuring-the-query-frontend-work-with-prometheus.md index bbb0272da4b..6668f5dea4e 100644 --- a/docs/sources/operators-guide/configuring/configuring-the-query-frontend-work-with-prometheus.md +++ b/docs/sources/operators-guide/configuring/configuring-the-query-frontend-work-with-prometheus.md @@ -7,9 +7,9 @@ weight: 90 # Configuring the Grafana Mimir query-frontend to work with Prometheus -You can use the Mimir query-frontend with any Prometheus-API compatible -service, including Prometheus and Thanos. Use this config file to get -the benefits of query parallelisation and caching. +You can use the Grafana Mimir query-frontend with any Prometheus-API compatible +service, including Prometheus and Thanos. Use this configuration file to +benefit from query parallelization and caching. [embedmd]:# (../../../configurations/prometheus-frontend.yml)
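The per-tenant HA-tracker settings changed in this diff (`accept_ha_samples`, `ha_cluster_label`, and `ha_replica_label`) all live in the overrides section of the runtime configuration. Here is a minimal sketch of that section; the tenant name and label values are hypothetical:

```yaml
overrides:
  tenant1:
    # Enable HA deduplication for this tenant only, while keeping the
    # default -distributor.ha-tracker.enable-for-all-users=false.
    accept_ha_samples: true
    # Override the default `cluster` and `__replica__` label names
    # for this tenant.
    ha_cluster_label: prometheus_cluster
    ha_replica_label: prometheus_replica
```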