2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Docker init error: temporary failure in dockerutil, will retry later: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Will retry collector docker later: temporary failure in dockerutil, will retry later: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Collector ecs_fargate failed to detect: failed to connect to task metadata API
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Trying to parse kubernetes_kubelet_host: 172.17.0.2
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Parsed kubernetes_kubelet_host: 172.17.0.2 is an address: 172.17.0.2, cached, trying to resolve it to hostname
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | kubernetes_kubelet_host: 172.17.0.2 is resolved to: [172-17-0-2.kubernetes.default.svc.cluster.local.]
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Got potential kubelet connection info from config, ips: [172.17.0.2], hostnames: [172-17-0-2.kubernetes.default.svc.cluster.local.]
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Docker init error: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | unable to get hostname from docker, make sure to set the kubernetes_kubelet_host option: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Got potential kubelet connection info from docker, ips: [], hostnames: []
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Trying several connection methods to locate the kubelet...
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Trying to use host 172.17.0.2 with HTTPS
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:395 in func1) | Skipping TLS verification
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:385 in func1) | Using HTTPS with service account bearer token
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Trying to query the kubelet endpoint ...
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:482 in func1) | Successfully queried https://172.17.0.2:10250/ without any security settings, adding security transport settings to query https://172.17.0.2:10250/pods
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:482 in func1) | Successfully connected securely to kubelet endpoint https://172.17.0.2:10250/pods
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:482 in func1) | Successfully authorized to query the kubelet on https://172.17.0.2:10250/pods: 200, using https://172.17.0.2:10250 as kubelet endpoint
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Can connect to kubelet using 172.17.0.2 and HTTPS
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:482 in func1) | Connection to the kubelet succeeded! 172.17.0.2 is set as kubelet host
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:395 in func1) | Skipping TLS verification
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:385 in func1) | Using HTTPS with service account bearer token
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | Kubelet endpoint is: https://172.17.0.2:10250
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:482 in func1) | Collector kubelet successfully detected
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:482 in func1) | Using collector kubelet
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | No ContainerImplementation found for container 9824d2d7986dec0208452eacb0781785528a54f89178959d483e7136a7f7222a in pod datadog-agent-agent-j62h6, skipping
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/log/log.go:473 in func1) | No ContainerImplementation found for container 8bd426676199dfc2110aaa388477958f8792abfb5c330e08e6cf68b1c8b0957e in pod datadog-agent-agent-j62h6, skipping
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/util/log/log.go:482 in func1) | overriding API key from env DD_API_KEY value
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/process/config/config.go:248 in loadConfigIfExists) | no config exists at /etc/datadog-agent/system-probe.yaml, ignoring...
2020-05-26 09:37:32 UTC | PROCESS | INFO | (pkg/process/config/config.go:413 in loadEnvVariables) | overriding API key from env DD_API_KEY value
2020-05-26 09:37:32 UTC | PROCESS | INFO | (main_common.go:106 in runAgent) | running on platform: linux-4.19.76-linuxkit-x86_64-with-glibc2.2.5
2020-05-26 09:37:32 UTC | PROCESS | INFO | (main_common.go:109 in runAgent) | running version: Version: 7.19.2, Git hash: f6cbd32, Git branch: HEAD, Build date: 2020-05-11T16:04:03, Go Version: go version go1.13.8 linux/amd64,
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/tagger/tagger.go:150 in tryCollectors) | ecs_fargate tag collector cannot start: Failed to connect to task metadata API, ECS tagging will not work
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/docker/global.go:41 in GetDockerUtil) | Docker init error: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-05-26 09:37:32 UTC | PROCESS | DEBUG | (pkg/util/ecs/metadata/detection.go:39 in detectAgentV1URL) | Could not inspect ecs-agent container: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/ecs/metadata/clients.go:50 in V1) | ECS metadata v1 client init error: temporary failure in ecsutil-meta-v1, will retry later: could not detect ECS agent, tried URLs: [http://10.244.0.1:51678/ http://169.254.172.1:51678/ http://localhost:51678/]
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/tagger/tagger.go:146 in tryCollectors) | will retry ecs later: temporary failure in ecsutil-meta-v1, will retry later: could not detect ECS agent, tried URLs: [http://10.244.0.1:51678/ http://169.254.172.1:51678/ http://localhost:51678/]
2020-05-26 09:37:33 UTC | PROCESS | INFO | (pkg/tagger/tagger.go:152 in tryCollectors) | kubelet tag collector successfully started
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/kubernetes/apiserver/apiserver.go:204 in connect) | Connected to kubernetes apiserver, version v1
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/kubernetes/apiserver/apiserver.go:210 in connect) | Could successfully collect Pods, Nodes, Services and Events
2020-05-26 09:37:33 UTC | PROCESS | INFO | (pkg/tagger/tagger.go:152 in tryCollectors) | kube-metadata-collector tag collector successfully started
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/docker/global.go:41 in GetDockerUtil) | Docker init error: temporary failure in dockerutil, will retry later: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/tagger/tagger.go:146 in tryCollectors) | will retry docker later: temporary failure in dockerutil, will retry later: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (main_common.go:139 in runAgent) | Docker is not available on this host
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/kubernetes/kubelet/podwatcher.go:139 in computeChanges) | Found 10 changed pods out of 10
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (main_common.go:145 in runAgent) | Running process-agent with DEBUG logging enabled
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/kubernetes/apiserver/apiserver_kubelet.go:32 in NodeMetadataMapping) | Successfully collected endpoints
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/kubernetes/apiserver/apiserver_kubelet.go:49 in processKubeServices) | Identified: 1 node, 10 pod, 5 endpoints
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/kubernetes/apiserver/apiserver_kubelet.go:67 in processKubeServices) | Refreshing cache for agent/KubernetesMetadataMapping/datadog-operator-control-plane
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/kubernetes/apiserver/services_kubelet.go:98 in MapOnRef) | Empty TargetRef on endpoint 172.17.0.2 of service kubernetes, skipping
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/network.go:32 in GetNetworkID) | GetNetworkID trying GCE
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/network.go:39 in GetNetworkID) | GetNetworkID trying EC2
2020-05-26 09:37:33 UTC | PROCESS | INFO | (pkg/process/checks/container.go:43 in Init) | no network ID detected: could not detect network ID
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/docker/global.go:41 in GetDockerUtil) | Docker init error: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-05-26 09:37:33 UTC | PROCESS | DEBUG | (pkg/util/containers/collectors/detector.go:114 in retryCandidates) | Will retry collector docker later: temporary failure in dockerutil, will retry later: try delay not elapsed yet
----------------------------- Results for check container -----------------------------
2020-05-26 09:37:34 UTC | PROCESS | DEBUG | (pkg/util/docker/global.go:41 in GetDockerUtil) | Docker init error: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-05-26 09:37:34 UTC | PROCESS | DEBUG | (pkg/util/containers/collectors/detector.go:114 in retryCandidates) | Will retry collector docker later: temporary failure in dockerutil, will retry later: try delay not elapsed yet
2020-05-26 09:37:34 UTC | PROCESS | DEBUG | (pkg/process/checks/container.go:117 in Run) | collected 11 containers in 17.0224ms
{
  "hostName": "datadog-operator-control-plane",
  "info": {
    "os": { "name": "linux", "platform": "debian", "family": "debian", "version": "bullseye/sid", "kernelVersion": "4.19.76-linuxkit" },
    "cpus": [
      { "vendor": "GenuineIntel", "family": "6", "model": "142", "physicalId": "0", "coreId": "0", "cores": 1, "mhz": 2800, "cacheSize": 8192 },
      { "number": 1, "vendor": "GenuineIntel", "family": "6", "model": "142", "physicalId": "1", "coreId": "0", "cores": 1, "mhz": 2800, "cacheSize": 8192 },
      { "number": 2, "vendor": "GenuineIntel", "family": "6", "model": "142", "physicalId": "2", "coreId": "0", "cores": 1, "mhz": 2800, "cacheSize": 8192 },
      { "number": 3, "vendor": "GenuineIntel", "family": "6", "model": "142", "physicalId": "3", "coreId": "0", "cores": 1, "mhz": 2800, "cacheSize": 8192 }
    ],
    "totalMemory": 2086154240
  },
  "containers": [
    { "type": "kubelet", "id": "b92e9163bb3c1ee7043d9bc9dc1f00d3c9818fb186f926adc807d0dc494586f4", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590482488, "wbps": 32768, "netRcvdPs": 49, "netSentPs": 50, "netRcvdBps": 60999, "netSentBps": 68489, "userPct": 1, "systemPct": 2, "totalPct": 3, "memRss": 50028544, "memCache": 8146944, "started": 1590482488, "tags": [ "pod_phase:running", "kube_container_name:etcd", "short_image:etcd", "kube_namespace:kube-system", "image_tag:3.4.3-0", "image_name:k8s.gcr.io/etcd", "pod_name:etcd-datadog-operator-control-plane", "display_container_name:etcd_etcd-datadog-operator-control-plane", "container_id:b92e9163bb3c1ee7043d9bc9dc1f00d3c9818fb186f926adc807d0dc494586f4" ], "threadCount": 16 },
    { "type": "kubelet", "id": "de62b1441df21311cb024c3ec2e2f14e565c97625f9b64ed732a170e13408768", "cpuLimit": 10, "memoryLimit": 52428800, "state": 3, "health": 2, "created": 1590482513, "netRcvdPs": 49, "netSentPs": 50, "netRcvdBps": 60999, "netSentBps": 68489, "memRss": 5820416, "memCache": 3215360, "started": 1590482513, "tags": [ "pod_phase:running", "kube_container_name:kindnet-cni", "image_tag:0.5.4", "short_image:kindnetd", "kube_daemon_set:kindnet", "kube_namespace:kube-system", "image_name:kindest/kindnetd", "pod_name:kindnet-wfshz", "container_id:de62b1441df21311cb024c3ec2e2f14e565c97625f9b64ed732a170e13408768", "display_container_name:kindnet-cni_kindnet-wfshz" ], "threadCount": 9 },
    { "type": "kubelet", "id": "6c464af90a947b332e55cfc017fa102e08b9394f6da6082308e47bcace4bfd2b", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590482525, "memRss": 5636096, "memCache": 3964928, "started": 1590482525, "tags": [ "short_image:local-path-provisioner", "image_tag:v0.0.12", "image_name:rancher/local-path-provisioner", "kube_namespace:local-path-storage", "pod_phase:running", "kube_deployment:local-path-provisioner", "kube_container_name:local-path-provisioner", "pod_name:local-path-provisioner-774f7f8fdb-8twrj", "kube_replica_set:local-path-provisioner-774f7f8fdb", "container_id:6c464af90a947b332e55cfc017fa102e08b9394f6da6082308e47bcace4bfd2b", "display_container_name:local-path-provisioner_local-path-provisioner-774f7f8fdb-8twrj" ], "threadCount": 14 },
    { "type": "kubelet", "id": "d386c56afb016f30a040e4c9c82a5e4ff0fbc767dbd152e6d950091eb7a551a8", "cpuLimit": 100, "memoryLimit": 178257920, "state": 3, "health": 2, "created": 1590482535, "netRcvdPs": 6, "netSentPs": 6, "netRcvdBps": 1178, "netSentBps": 541, "memRss": 13410304, "memCache": 3690496, "started": 1590482535, "tags": [ "short_image:coredns", "image_tag:1.6.7", "kube_namespace:kube-system", "pod_phase:running", "kube_deployment:coredns", "kube_container_name:coredns", "image_name:k8s.gcr.io/coredns", "kube_service:kube-dns", "kube_replica_set:coredns-66bff467f8", "pod_name:coredns-66bff467f8-dp8sf", "display_container_name:coredns_coredns-66bff467f8-dp8sf", "container_id:d386c56afb016f30a040e4c9c82a5e4ff0fbc767dbd152e6d950091eb7a551a8" ], "addresses": [ { "ip": "10.244.0.3", "port": 53, "protocol": 1 }, { "ip": "10.244.0.3", "port": 53 }, { "ip": "10.244.0.3", "port": 9153 } ], "threadCount": 14 },
    { "type": "kubelet", "id": "c9b87d7242e5a7af69cf0284a64e6535266b322e080c21b3a9680688c4308556", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590484865, "rbps": 8192, "netRcvdPs": 14, "netSentPs": 15, "netRcvdBps": 1218, "netSentBps": 58063, "userPct": 5, "systemPct": 2, "totalPct": 7, "memRss": 129445888, "memCache": 69632000, "started": 1590484865, "tags": [ "kube_daemon_set:datadog-agent-agent", "kube_container_name:agent", "kube_namespace:datadog", "pod_phase:running", "image_name:datadog/agent", "short_image:agent", "image_tag:latest", "pod_name:datadog-agent-agent-j62h6", "container_id:c9b87d7242e5a7af69cf0284a64e6535266b322e080c21b3a9680688c4308556", "display_container_name:agent_datadog-agent-agent-j62h6" ], "addresses": [ { "ip": "10.244.0.6", "port": 8125, "protocol": 1 } ], "threadCount": 28 },
    { "type": "kubelet", "id": "1842229136149ead78dde13336c5e233f736aa665b1048a7d3fde52e755ce4d6", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590484865, "netRcvdPs": 14, "netSentPs": 15, "netRcvdBps": 1218, "netSentBps": 58063, "userPct": 1, "systemPct": 1, "totalPct": 2, "memRss": 18366464, "memCache": 21458944, "started": 1590484865, "tags": [ "image_name:datadog/agent", "image_tag:latest", "kube_namespace:datadog", "pod_phase:running", "kube_daemon_set:datadog-agent-agent", "kube_container_name:process-agent", "short_image:agent", "pod_name:datadog-agent-agent-j62h6", "container_id:1842229136149ead78dde13336c5e233f736aa665b1048a7d3fde52e755ce4d6", "display_container_name:process-agent_datadog-agent-agent-j62h6" ], "threadCount": 15 },
    { "type": "kubelet", "id": "c0ffd52cc488a2ad79e9cbdf91eae8957691beb2436caf11f138fd7970639c53", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590482487, "netRcvdPs": 49, "netSentPs": 50, "netRcvdBps": 60999, "netSentBps": 68489, "userPct": 2, "systemPct": 4, "totalPct": 6, "memRss": 356438016, "memCache": 14688256, "started": 1590482487, "tags": [ "short_image:kube-apiserver", "pod_phase:running", "kube_container_name:kube-apiserver", "image_name:k8s.gcr.io/kube-apiserver", "image_tag:v1.18.0", "kube_namespace:kube-system", "pod_name:kube-apiserver-datadog-operator-control-plane", "container_id:c0ffd52cc488a2ad79e9cbdf91eae8957691beb2436caf11f138fd7970639c53", "display_container_name:kube-apiserver_kube-apiserver-datadog-operator-control-plane" ], "threadCount": 16 },
    { "type": "kubelet", "id": "a0121eabb4c28b062cbeb3eae0fdf3cf7a9f6df289520e5b64f2ff0b95411d81", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590482487, "netRcvdPs": 49, "netSentPs": 50, "netRcvdBps": 60999, "netSentBps": 68489, "userPct": 1, "systemPct": 2, "totalPct": 3, "memRss": 43765760, "memCache": 9768960, "started": 1590482487, "tags": [ "image_name:k8s.gcr.io/kube-controller-manager", "pod_phase:running", "kube_namespace:kube-system", "kube_container_name:kube-controller-manager", "short_image:kube-controller-manager", "image_tag:v1.18.0", "pod_name:kube-controller-manager-datadog-operator-control-plane", "container_id:a0121eabb4c28b062cbeb3eae0fdf3cf7a9f6df289520e5b64f2ff0b95411d81", "display_container_name:kube-controller-manager_kube-controller-manager-datadog-operator-control-plane" ], "threadCount": 13 },
    { "type": "kubelet", "id": "a2606d6219088f942f36c4e9a3f0656018c63e82718b3e838f6152dd1c9a4a7a", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590482487, "netRcvdPs": 49, "netSentPs": 50, "netRcvdBps": 60999, "netSentBps": 68489, "memRss": 23392256, "memCache": 7589888, "started": 1590482487, "tags": [ "image_name:k8s.gcr.io/kube-scheduler", "short_image:kube-scheduler", "kube_namespace:kube-system", "kube_container_name:kube-scheduler", "image_tag:v1.18.0", "pod_phase:running", "pod_name:kube-scheduler-datadog-operator-control-plane", "display_container_name:kube-scheduler_kube-scheduler-datadog-operator-control-plane", "container_id:a2606d6219088f942f36c4e9a3f0656018c63e82718b3e838f6152dd1c9a4a7a" ], "threadCount": 14 },
    { "type": "kubelet", "id": "d6ea1795f63f5d36611dacc5030c37758eaae72af3adc870ec1d9b3cd22b5334", "cpuLimit": 100, "state": 3, "health": 2, "created": 1590482513, "netRcvdPs": 49, "netSentPs": 50, "netRcvdBps": 60999, "netSentBps": 68489, "memRss": 8122368, "memCache": 9814016, "started": 1590482513, "tags": [ "pod_phase:running", "image_name:k8s.gcr.io/kube-proxy", "short_image:kube-proxy", "kube_namespace:kube-system", "kube_container_name:kube-proxy", "image_tag:v1.18.0", "kube_daemon_set:kube-proxy", "pod_name:kube-proxy-7jtpk", "container_id:d6ea1795f63f5d36611dacc5030c37758eaae72af3adc870ec1d9b3cd22b5334", "display_container_name:kube-proxy_kube-proxy-7jtpk" ], "threadCount": 12 },
    { "type": "kubelet", "id": "e4a1f8c4d4fd1243bd744b7fd707828cebdc8dcc89c454c4b8cea04347bec1f6", "cpuLimit": 100, "memoryLimit": 178257920, "state": 3, "health": 2, "created": 1590482535, "netRcvdPs": 7, "netSentPs": 7, "netRcvdBps": 1244, "netSentBps": 642, "memRss": 14278656, "memCache": 2060288, "started": 1590482535, "tags": [ "kube_namespace:kube-system", "pod_phase:running", "kube_deployment:coredns", "kube_container_name:coredns", "image_name:k8s.gcr.io/coredns", "short_image:coredns", "image_tag:1.6.7", "kube_service:kube-dns", "pod_name:coredns-66bff467f8-6tq9f", "kube_replica_set:coredns-66bff467f8", "display_container_name:coredns_coredns-66bff467f8-6tq9f", "container_id:e4a1f8c4d4fd1243bd744b7fd707828cebdc8dcc89c454c4b8cea04347bec1f6" ], "addresses": [ { "ip": "10.244.0.4", "port": 53, "protocol": 1 }, { "ip": "10.244.0.4", "port": 53 }, { "ip": "10.244.0.4", "port": 9153 } ], "threadCount": 13 }
  ],
  "groupId": 1,
  "groupSize": 1
}