
Publish runtime metric in seconds #5893

Merged 7 commits on Aug 11, 2022
62 changes: 48 additions & 14 deletions src/v/redpanda/application.cc
@@ -58,7 +58,6 @@
#include "redpanda/admin_server.h"
#include "resource_mgmt/io_priority.h"
#include "rpc/simple_protocol.h"
#include "ssx/metrics.h"
#include "storage/backlog_controller.h"
#include "storage/chunk_cache.h"
#include "storage/compaction_controller.h"
@@ -294,8 +293,11 @@ void application::initialize(
}

_scheduling_groups.create_groups().get();
_deferred.emplace_back(
[this] { _scheduling_groups.destroy_groups().get(); });
_scheduling_groups_probe.wire_up(_scheduling_groups);
_deferred.emplace_back([this] {
_scheduling_groups_probe.clear();
_scheduling_groups.destroy_groups().get();
});

if (proxy_cfg) {
_proxy_config.emplace(*proxy_cfg);
@@ -314,23 +316,55 @@ void application::initialize(
}

void application::setup_metrics() {
if (!config::shard_local_cfg().disable_public_metrics()) {
seastar::metrics::replicate_metric_families(
seastar::metrics::default_handle(),
{{"scheduler_runtime_ms", ssx::metrics::public_metrics_handle},
{"io_queue_total_read_ops", ssx::metrics::public_metrics_handle},
{"io_queue_total_write_ops", ssx::metrics::public_metrics_handle},
{"memory_allocated_memory", ssx::metrics::public_metrics_handle},
{"memory_free_memory", ssx::metrics::public_metrics_handle}})
.get();
}
setup_internal_metrics();
setup_public_metrics();
}

if (config::shard_local_cfg().disable_metrics()) {
void application::setup_public_metrics() {
namespace sm = ss::metrics;

if (config::shard_local_cfg().disable_public_metrics()) {
return;
}

seastar::metrics::replicate_metric_families(
seastar::metrics::default_handle(),
{{"io_queue_total_read_ops", ssx::metrics::public_metrics_handle},
{"io_queue_total_write_ops", ssx::metrics::public_metrics_handle},
{"memory_allocated_memory", ssx::metrics::public_metrics_handle},
{"memory_free_memory", ssx::metrics::public_metrics_handle}})
.get();

_public_metrics.add_group(
"application",
{
sm::make_gauge(
"uptime_seconds_total",
[] {
return std::chrono::duration<double>(ss::engine().uptime())
.count();
},
sm::description("Redpanda uptime in seconds"))
.aggregate({sm::shard_label}),
sm::make_gauge(
"busy_seconds_total",
[] {
return std::chrono::duration<double>(
ss::engine().total_busy_time())
.count();
},
sm::description("Total CPU busy time in seconds"))
.aggregate({sm::shard_label}),
Member:

I think it's worth keeping the shards here.

Contributor Author (@VladLazar), Aug 11, 2022:

These metrics are only reported from one shard. That's why I aggregated; it basically just drops the label.

Member:

curl -s localhost:19644/metrics | grep cpu_busy_ms
# HELP vectorized_reactor_cpu_busy_ms Total cpu busy time in milliseconds
# TYPE vectorized_reactor_cpu_busy_ms counter
vectorized_reactor_cpu_busy_ms{shard="0"} 6268
vectorized_reactor_cpu_busy_ms{shard="1"} 5003

Member:

Oh, you mean this probe is only on one shard? Can you register a metric for each shard?

Contributor Author:

Right. I've confused myself. It's reported from one shard because it's only registered on one shard. We should probably register it on all shards, which means this is probably not the right place for the "busy_time" metric. Let me have a think.

Member:

You can invoke_on, or submit_to, or create a sharded<probe> and .invoke_on_all, or move it.

Contributor Author:

I know. It just feels a bit unnatural to do it there.

Contributor Author:

By the way, do we want the uptime for every shard? It would probably be redundant. Busy time for each shard makes sense, though.

Member:

Yep, that's how it is:

curl -s localhost:19644/metrics | grep uptime
# HELP vectorized_application_uptime Redpanda uptime in milliseconds
# TYPE vectorized_application_uptime gauge
vectorized_application_uptime{shard="0"} 24680.000000

Contributor Author:

Done. I wrapped the metric_groups object in a ss::sharded. uptime is only reported from the home shard, while busy is reported from all shards.

});
}

void application::setup_internal_metrics() {
namespace sm = ss::metrics;

if (config::shard_local_cfg().disable_metrics()) {
return;
}

// build info
auto version_label = sm::label("version");
auto revision_label = sm::label("revision");
7 changes: 7 additions & 0 deletions src/v/redpanda/application.h
@@ -31,9 +31,11 @@
#include "redpanda/admin_server.h"
#include "resource_mgmt/cpu_scheduling.h"
#include "resource_mgmt/memory_groups.h"
#include "resource_mgmt/scheduling_groups_probe.h"
#include "resource_mgmt/smp_groups.h"
#include "rpc/fwd.h"
#include "seastarx.h"
#include "ssx/metrics.h"
#include "storage/fwd.h"
#include "v8_engine/fwd.h"

@@ -148,6 +150,8 @@ class application {
}

void setup_metrics();
void setup_public_metrics();
void setup_internal_metrics();
std::unique_ptr<ss::app_template> _app;
bool _redpanda_enabled{true};
cluster::config_manager::preload_result _config_preload;
@@ -157,6 +161,7 @@
_schema_reg_config;
std::optional<kafka::client::configuration> _schema_reg_client_config;
scheduling_groups _scheduling_groups;
scheduling_groups_probe _scheduling_groups_probe;
ss::logger _log;

ss::sharded<rpc::connection_cache> _connection_cache;
@@ -174,6 +179,8 @@
ss::sharded<archival::upload_controller> _archival_upload_controller;

ss::metrics::metric_groups _metrics;
ss::metrics::metric_groups _public_metrics{
ssx::metrics::public_metrics_handle};
std::unique_ptr<kafka::rm_group_proxy_impl> _rm_group_proxy;
// run these first on destruction
deferred_actions _deferred;
17 changes: 17 additions & 0 deletions src/v/resource_mgmt/cpu_scheduling.h
@@ -65,7 +65,24 @@ class scheduling_groups final {
}
ss::scheduling_group archival_upload() { return _archival_upload; }

std::vector<std::reference_wrapper<const ss::scheduling_group>>
all_scheduling_groups() const {
return {
std::cref(_default),
std::cref(_admin),
std::cref(_raft),
std::cref(_kafka),
std::cref(_cluster),
std::cref(_coproc),
std::cref(_cache_background_reclaim),
std::cref(_compaction),
std::cref(_raft_learner_recovery),
std::cref(_archival_upload)};
}

private:
ss::scheduling_group _default{
seastar::default_scheduling_group()}; // created and managed by seastar
Member (commenting on lines +84 to +85):

Good catch!

ss::scheduling_group _admin;
ss::scheduling_group _raft;
ss::scheduling_group _kafka;
52 changes: 52 additions & 0 deletions src/v/resource_mgmt/scheduling_groups_probe.h
@@ -0,0 +1,52 @@
/*
* Copyright 2022 Redpanda Data, Inc.
*
* Use of this software is governed by the Business Source License
* included in the file licenses/BSL.md
*
* As of the Change Date specified in that file, in accordance with
* the Business Source License, use of this software will be governed
* by the Apache License, Version 2.0
*/

#pragma once

#include "cluster/partition_leaders_table.h"
#include "config/configuration.h"
#include "prometheus/prometheus_sanitize.h"
#include "resource_mgmt/cpu_scheduling.h"
#include "ssx/metrics.h"

#include <seastar/core/metrics.hh>

class scheduling_groups_probe {
public:
void wire_up(const scheduling_groups& scheduling_groups) {
if (config::shard_local_cfg().disable_public_metrics()) {
return;
}

auto groups = scheduling_groups.all_scheduling_groups();
for (const auto& group_ref : groups) {
_public_metrics.add_group(
prometheus_sanitize::metrics_name("scheduler"),
{seastar::metrics::make_counter(
"runtime_seconds_total",
[group_ref] {
auto runtime_duration = group_ref.get().get_stats().runtime;
return std::chrono::duration<double>(runtime_duration).count();
},
seastar::metrics::description(
"Accumulated runtime of task queue associated with this "
"scheduling group"),
{ssx::metrics::make_namespaced_label("scheduling_group")(
group_ref.get().name())})});
}
}

void clear() { _public_metrics.clear(); }

private:
seastar::metrics::metric_groups _public_metrics{
ssx::metrics::public_metrics_handle};
};