
[exporter/clickhouse] "metric type is unset" error while exporting custom created metrics despite the DataType being Gauge #27671

Closed
ceevaaa opened this issue Oct 14, 2023 · 2 comments
Labels
bug Something isn't working exporter/clickhouse needs triage New item requiring triage

Comments

ceevaaa commented Oct 14, 2023

Component(s)

exporter/clickhouse

What happened?

Description

AIM - To fetch metrics from AWS CloudWatch, convert them into OTLP format, and send them to ClickHouse.

The flow looks something like this.
AWS Cloudwatch --> python script to fetch the metrics via Cloudwatch API --> convert to OTLP --> local otel collector --> central otel collector --> Clickhouse

What is working? (at least I think so)

AWS Cloudwatch - Able to see metrics from the API ✅
Python Script - Able to see the metrics on STDOUT when using the Console exporter (see the Python script below for reference) ✅
Convert to OTLP - Able to see the metrics in OTLP format ✅
local otel collector - Able to see metrics in the local otel collector logs ✅
central otel collector - Able to see metrics in the central otel collector logs ✅
Clickhouse - FAILS and throws an error (check Log output) ❌

Expected Result

The ClickHouse exporter should be able to write the custom metrics.

Actual Result

Throws an error:

2023-10-14T09:23:27.077Z        info    exporterhelper/retry_sender.go:177      Exporting failed. Will retry the request after interval.        {"kind": "exporter", "data_type": "metrics", "name": "clickhouse/realtime", "error": "metrics type is unset", "interval": "14.063579629s"}
2023-10-14T09:23:28.024Z        info    MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 16, "data points": 152}
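For context, this error is presumably raised when the exporter encounters a metric whose data field carries none of the recognized types (Gauge, Sum, Histogram, ...). A minimal pre-flight check is sketched below; `Metric` and `Gauge` here are simplified stand-ins for the OTel SDK dataclasses (an assumption, so the snippet runs without the SDK), not the exporter's actual code.

```python
from dataclasses import dataclass
from typing import Optional

# Simplified stand-ins for the OTel SDK metric dataclasses. Assumption:
# a metric whose `data` is None is what the exporter reports as "unset".
@dataclass
class Gauge:
    data_points: list

@dataclass
class Metric:
    name: str
    description: str
    unit: str
    data: Optional[object]  # Gauge / Sum / Histogram / ... or None

KNOWN_TYPES = (Gauge,)  # extend with Sum, Histogram, ... as needed

def find_untyped_metrics(metrics: list) -> list:
    """Return names of metrics that carry no recognized data type."""
    return [m.name for m in metrics if not isinstance(m.data, KNOWN_TYPES)]

metrics = [
    Metric("ActiveControllerCount", "desc", "Count", Gauge([0.333333])),
    Metric("SomeBrokenMetric", "desc", "Count", None),
]
print(find_untyped_metrics(metrics))  # ['SomeBrokenMetric']
```

Running a check like this over the batch before exporting would confirm whether the custom metrics are the culprit.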

Collector version

v0.86.0

Environment information

Environment

The local and central collectors run as Docker containers on the following host:
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"

OpenTelemetry Collector configuration

No response

Log output

{"kind": "exporter", "data_type": "metrics", "name": "logging"}
2023-10-14T09:23:27.077Z        info    exporterhelper/retry_sender.go:177      Exporting failed. Will retry the request after interval.        {"kind": "exporter", "data_type": "metrics", "name": "clickhouse/realtime", "error": "metrics type is unset", "interval": "14.063579629s"}
2023-10-14T09:23:28.024Z        info    MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 16, "data points": 152}
2023-10-14T09:23:28.025Z        info    ResourceMetrics #0
Resource SchemaURL:
Resource attributes:
     -> telemetry.sdk.language: Str(python)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.version: Str(1.16.0)
     -> service.name: Str(aws)
     -> service.version: Str(v1)
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope aws-cloudwatch v1
Metric #0
Descriptor:
     -> Name: ActiveControllerCount
     -> Description: Only one controller per cluster should be active at any given time.
     -> Unit: Count
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> namespace: Str(AWS/Kafka)
     -> metric: Str(ActiveControllerCount)
     -> dimensions: Str(Cluster Name=)
     -> aggregation: Str(Average)
     -> label: Str(fas-oac-msk-signup)
     -> region: Str(ap-south-1)
     -> timestamp: Str(2023-10-14T09:15:00Z)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-10-14 09:15:00 +0000 UTC
Value: 0.333333
NumberDataPoints #1
Data point attributes:
     -> namespace: Str(AWS/Kafka)
     -> metric: Str(ActiveControllerCount)
     -> dimensions: Str(Cluster Name=)
     -> aggregation: Str(Average)
     -> label: Str(fas-trd-kfk-cavery)
     -> region: Str(ap-south-1)
     -> timestamp: Str(2023-10-14T09:15:00Z)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-10-14 09:15:00 +0000 UTC
Value: 0.333333
.
.
.
16 other metrics

Additional context

Snippet of Python script being used to generate metrics.

module_name = "aws"

# default packages
import json
import argparse
import logging
from functools import partial
from typing import Tuple
from cachetools import cached, TTLCache
from datetime import datetime, timedelta

# installed packages
import yaml
import boto3
import schedule
from cerberus import Validator
from botocore.exceptions import ClientError
from opentelemetry.sdk.util.instrumentation import InstrumentationScope
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import (
    OTLPMetricExporter,
)
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    Gauge,
    NumberDataPoint,
    Metric,
    MetricsData,
    ScopeMetrics,
    ResourceMetrics,
)
from opentelemetry.sdk.resources import Resource

.
.
.

def export_metrics(callbacks: dict, exporters: list, resource: Resource):
    """
    Exports metrics. 
    :param callbacks: Callbacks to be used to fetch the metrics of the schema:
                        {
                            "metric_name": {
                                "unit": "<unit>",
                                "func": <callback_function>,
                                "description": "<description>"
                            }
                        }
    :param exporters: a list of exporters to use
    :param resource: the OTEL resource object
    """
    metrics = []
    for metric_name, callback_metadata in callbacks.items():
        data_points = list(callback_metadata["func"]())
        if len(data_points) == 0:
            _logger.debug(f"no data points found for metric: {metric_name}")
            continue
        gauge = Gauge(data_points)
        metric = Metric(metric_name, callback_metadata["description"], callback_metadata["unit"], gauge)
        metrics.append(metric)

    scope_metrics = ScopeMetrics(
        scope=InstrumentationScope("aws-cloudwatch", "v1", ""),
        metrics=metrics,
        schema_url=""
    )
    resource_metrics = ResourceMetrics(
        resource=resource,
        scope_metrics=[scope_metrics],
        schema_url=""
    )
    metrics_data = MetricsData([resource_metrics])

    _logger.debug("exporting metrics")
    for exporter in exporters:
        exporter.export(metrics_data)
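For reference, here is a self-contained sketch of the callbacks contract `export_metrics` expects, with a fake exporter standing in for `OTLPMetricExporter` so it runs without the OTel SDK. The callback name and values are hypothetical, and the loop is a simplified version of the one above.

```python
def fetch_active_controller_count():
    # Stand-in for a CloudWatch GetMetricData call.
    return [0.333333, 0.333333]

# Matches the schema documented in the export_metrics docstring.
callbacks = {
    "ActiveControllerCount": {
        "unit": "Count",
        "func": fetch_active_controller_count,
        "description": "Only one controller per cluster should be active.",
    }
}

class FakeExporter:
    """Captures exported batches instead of sending them over gRPC."""
    def __init__(self):
        self.exported = []
    def export(self, metrics_data):
        self.exported.append(metrics_data)

# Simplified version of the loop in export_metrics:
metrics = []
for name, meta in callbacks.items():
    points = list(meta["func"]())
    if points:
        metrics.append((name, meta["unit"], points))

exporter = FakeExporter()
exporter.export(metrics)
print(exporter.exported[0][0][0])  # ActiveControllerCount
```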

.
.
.
@ceevaaa ceevaaa added bug Something isn't working needs triage New item requiring triage labels Oct 14, 2023
@github-actions

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@ceevaaa ceevaaa changed the title [exporter/clickhouse] "metric type is unset" while exporting custom created metrics despite the DataType being Gauge [exporter/clickhouse] "metric type is unset" error while exporting custom created metrics despite the DataType being Gauge Oct 14, 2023

ceevaaa commented Oct 14, 2023

Turns out the issue is not with my custom script's metric data but rather with one of the other receivers (yet to figure out which).

Currently using these receivers:

  1. otlp
  2. hostmetrics
  3. kafkametrics
  4. redis
  5. mysql
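One way to narrow it down is to bisect: wire a single receiver into the metrics pipeline alongside the logging exporter and swap receivers until the error reappears. A minimal collector-config sketch, assuming the endpoint and scraper values below are placeholders:

```yaml
receivers:
  hostmetrics:          # swap in otlp / kafkametrics / redis / mysql one at a time
    scrapers:
      cpu:
exporters:
  logging:
    verbosity: detailed
  clickhouse:
    endpoint: tcp://clickhouse:9000   # placeholder endpoint
service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [logging, clickhouse]
```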

Going to close this for now.

Apologies if anyone wasted their time debugging this.
Thanks and regards.
