
[BUG] Dashboards yield blank panel: Metric was not found #36

Open
wuetz opened this issue Feb 11, 2022 · 2 comments
Labels
bug Something isn't working
wontfix This will not be worked on

Comments

wuetz commented Feb 11, 2022

What is the bug?
The dashboards render, but every panel is empty/blank, so no metric data is shown.
(screenshot: PerfTop dashboard with blank panels)

How can one reproduce the bug?
Steps to reproduce the behavior:

  1. Install the opensearch:1.2.3 Docker image and expose port 9600 for the Performance Analyzer.
  2. Verify that the Performance Analyzer is responding:
curl "localhost:9600/_plugins/_performanceanalyzer/metrics/units"
{"Disk_Utilization":"%","Cache_Request_Hit":"count","Segments_Memory":"B","Refresh_Time":"ms","ThreadPool_QueueLatency":"count","Merge_Time":"ms","ClusterApplierService_Latency":"ms","PublishClusterState_Latency":"ms","Cache_Request_Size":"B","LeaderCheck_Failure":"count","ThreadPool_QueueSize":"count","Sched_Runtime":"s/ctxswitch","Disk_ServiceRate":"MB/s","Heap_AllocRate":"B/s","Indexing_Pressure_Current_Limits":"B","Sched_Waittime":"s/ctxswitch","ShardBulkDocs":"count","Thread_Blocked_Time":"s/event","VersionMap_Memory":"B","Master_Task_Queue_Time":"ms","IO_TotThroughput":"B/s","Indexing_Pressure_Current_Bytes":"B","Indexing_Pressure_Last_Successful_Timestamp":"ms","Net_PacketRate6":"packets/s","Cache_Query_Hit":"count","IO_ReadSyscallRate":"count/s","Net_PacketRate4":"packets/s","Cache_Request_Miss":"count","ThreadPool_RejectedReqs":"count","Net_TCP_TxQ":"segments/flow","Master_Task_Run_Time":"ms","IO_WriteSyscallRate":"count/s","IO_WriteThroughput":"B/s","Refresh_Event":"count","Flush_Time":"ms","Heap_Init":"B","Indexing_Pressure_Rejection_Count":"count","CPU_Utilization":"cores","Cache_Query_Size":"B","Merge_Event":"count","DocValues_Memory":"B","Cache_FieldData_Eviction":"count","IO_TotalSyscallRate":"count/s","Net_Throughput":"B/s","Paging_RSS":"pages","AdmissionControl_ThresholdValue":"count","Indexing_Pressure_Average_Window_Throughput":"count/s","Cache_MaxSize":"B","IndexWriter_Memory":"B","Net_TCP_SSThresh":"B/flow","IO_ReadThroughput":"B/s","LeaderCheck_Latency":"ms","FollowerCheck_Failure":"count","TermVectors_Memory":"B","HTTP_RequestDocs":"count","Net_TCP_Lost":"segments/flow","GC_Collection_Event":"count","Sched_CtxRate":"count/s","AdmissionControl_RejectionCount":"count","Heap_Max":"B","ClusterApplierService_Failure":"count","PublishClusterState_Failure":"count","Merge_CurrentEvent":"count","Indexing_Buffer":"B","Bitset_Memory":"B","Norms_Memory":"B","Net_PacketDropRate4":"packets/s","Heap_Committed":"B","Net_PacketDropRate6":"packets/s","Thread_Blocked_Event":"count","GC_Collection_Time":"ms","Cache_Query_Miss":"count","Latency":"ms","Shard_State":"count","Thread_Waited_Event":"count","CB_ConfiguredSize":"B","ThreadPool_QueueCapacity":"count","CB_TrippedEvents":"count","Disk_WaitTime":"ms","Data_RetryingPendingTasksCount":"count","AdmissionControl_CurrentValue":"count","Flush_Event":"count","Net_TCP_RxQ":"segments/flow","Points_Memory":"B","Shard_Size_In_Bytes":"B","Thread_Waited_Time":"s/event","HTTP_TotalRequests":"count","ThreadPool_ActiveThreads":"count","Paging_MinfltRate":"count/s","Net_TCP_SendCWND":"B/flow","Cache_Request_Eviction":"count","Segments_Total":"count","FollowerCheck_Latency":"ms","Terms_Memory":"B","Heap_Used":"B","Master_ThrottledPendingTasksCount":"count","CB_EstimatedSize":"B","Indexing_ThrottleTime":"ms","StoredFields_Memory":"B","Master_PendingQueueSize":"count","Cache_FieldData_Size":"B","Paging_MajfltRate":"count/s","ThreadPool_TotalThreads":"count","ShardEvents":"count","Net_TCP_NumFlows":"count","Election_Term":"count"}
  3. Start PerfTop: opensearch-perf-top-linux --endpoint http://localhost:9600 --dashboard NodeAnalysis --logfile /var/log/opensearch-perf-top-linux.log --nodename opensearch-cluster
  4. Observe the logfile /var/log/opensearch-perf-top-linux.log for these errors:
No matches for nodeName=opensearch-cluster
Metric was not found for request with queryParams:
        endpoint: http://localhost:9600
        metrics: Net_PacketDropRate4
        agg:sum
        dim:Direction
No matches for nodeName=opensearch-cluster
Metric was not found for request with queryParams:
        endpoint: http://localhost:9600
        metrics: CPU_Utilization
        agg:sum
        dim:Operation
Data returned for nodeName=local was in an unexpected format:
        {"timestamp":1644583150000,"data":{}}
No matches for nodeName=opensearch-cluster
Metric was not found for request with queryParams:
        endpoint: http://localhost:9600
        metrics: Net_PacketDropRate6
        agg:sum
        dim:Direction
No matches for nodeName=opensearch-cluster
Data returned for nodeName=local was in an unexpected format:
        {"timestamp":1644583150000,"data":{}}
No matches for nodeName=opensearch-cluster
Data returned for nodeName=local was in an unexpected format:
        {"timestamp":1644583150000,"data":{}}
  5. Manual testing: a manual curl for e.g. Net_PacketDropRate4 also returns a more or less empty JSON (a loop to spot-check several metric/dimension pairs at once is sketched after these examples):
curl "localhost:9600/_plugins/_performanceanalyzer/metrics?metrics=Net_PacketDropRate4&agg=avg&dim=Direction&nodes=all"

response:

{"local": {"timestamp": 1644583520000, "data": {}}}
curl "localhost:9600/_plugins/_performanceanalyzer/metrics?metrics=CPU_Utilization&agg=avg&dim=Operation&nodes=all"

response:

{"local": {"timestamp": 1644583710000, "data": {"fields":[{"name":"Operation","type":"VARCHAR"},{"name":"CPU_Utilization","type":"DOUBLE"}],"records":[]}}}
curl "localhost:9600/_plugins/_performanceanalyzer/metrics?metrics=Net_PacketDropRate6&agg=avg&dim=Direction&nodes=all"
{"local": {"timestamp": 1644583650000, "data": {}}}

response:

{"local": {"timestamp": 1644583765000, "data": {}}}

What is the expected behavior?
PerfTop dashboards populated with metric data instead of blank panels.

What is your host/environment?

  • OS: SLES 15 SP3
  • Version: OpenSearch Docker image 1.2.3
wuetz added the bug and untriaged labels on Feb 11, 2022

toby181 commented Apr 22, 2022

Did you enable the plugin? See https://opensearch.org/docs/latest/opensearch/install/docker/#configure-opensearch -> (Optional) Set up Performance Analyzer
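For reference, a minimal sketch of turning the plugin on against a running container (assumptions: a default single-node Docker setup with the security demo credentials on localhost:9200; adjust host, credentials, and TLS flags for your environment):

# Hedged sketch, not taken from this thread: enable the Performance Analyzer
# plugin (and optionally the RCA framework) via its cluster config API,
# then re-check the metrics endpoint on port 9600 after a short delay.
curl -k -u admin:admin -XPOST "https://localhost:9200/_plugins/_performanceanalyzer/cluster/config" \
  -H 'Content-Type: application/json' -d '{"enabled": true}'
curl -k -u admin:admin -XPOST "https://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config" \
  -H 'Content-Type: application/json' -d '{"enabled": true}'
curl "localhost:9600/_plugins/_performanceanalyzer/metrics?metrics=CPU_Utilization&agg=avg&dim=Operation&nodes=all"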


kkhatua commented Nov 3, 2022

We're deprioritizing this in favor of moving to a web-based UI, but will leave the issue open.
Feel free to create a pull request if this is critical and the configuration steps above don't work.

kkhatua added the wontfix label and removed the untriaged label on Nov 3, 2022