Hi,
I have installed NeuVector on a k3d cluster with the following Helm options:

```
helm upgrade neuvector neuvector/core \
  --namespace neuvector \
  --install \
  --create-namespace \
  --version 2.2.2 \
  --set tag=5.0.2 \
  --set registry=docker.io \
  --set k3s.enabled=true \
  --set manager.ingress.enabled=true \
  --set manager.ingress.ingressClassName="nginx" \
  --set manager.svc.type="ClusterIP" \
  --set manager.ingress.host="neuvector.internal.xxxxxxxx.com"
```
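For reference, the same options can also be kept in a values file instead of repeated `--set` flags. This is a hypothetical equivalent, assuming the flag paths above map one-to-one onto the chart's values structure (the ingress host is kept redacted as in the command):

```yaml
# values.yaml — assumed equivalent of the --set flags above
tag: 5.0.2
registry: docker.io
k3s:
  enabled: true
manager:
  svc:
    type: ClusterIP
  ingress:
    enabled: true
    ingressClassName: nginx
    host: neuvector.internal.xxxxxxxx.com
```

It would then be applied with `helm upgrade neuvector neuvector/core --install --namespace neuvector --create-namespace --version 2.2.2 -f values.yaml`.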
The neuvector-enforcer pods are all restarting; their logs show several errors:
```
2022-09-19T18:52:57|MON|/usr/local/bin/monitor starts, pid=25242
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64
Check TC kernel module ...
module act_mirred not find.
module act_pedit not find.
2022-09-19T18:52:57|MON|Start dp, pid=25275
2022-09-19T18:52:57|MON|Start agent, pid=25277
1970-01-01T00:00:00|DEBU||dpi_dlp_init: enter
1970-01-01T00:00:00|DEBU||dpi_dlp_register_options: enter
1970-01-01T00:00:00|DEBU||net_run: enter
1970-01-01T00:00:00|DEBU|cmd|dp_ctrl_loop: enter
2022-09-19T18:52:57|DEBU|dlp|dp_bld_dlp_thr: dp bld_dlp thread starts
2022-09-19T18:52:57|DEBU|dp0|dpi_frag_init: enter
2022-09-19T18:52:57|DEBU|dp0|dpi_session_init: enter
2022-09-19T18:52:57|DEBU|dp0|dp_data_thr: dp thread starts
2022-09-19T18:52:57.589|INFO|AGT|main.main: START - version=v5.0.2
2022-09-19T18:52:57.594|INFO|AGT|main.main: - bind=192.168.181.217
2022-09-19T18:52:57.614|INFO|AGT|system.NewSystemTools: cgroup v2
2022-09-19T18:52:57.615|INFO|AGT|container.Connect: - endpoint=
2022-09-19T18:52:57.659|WARN|AGT|container.parseEndpointWithFallbackProtocol: no error unix /run/containerd/containerd.sock.
2022-09-19T18:52:57.673|INFO|AGT|container.containerdConnect: cri - version=&VersionResponse{Version:0.1.0,RuntimeName:containerd,RuntimeVersion:v1.6.6-k3s1,RuntimeApiVersion:v1alpha2,}
2022-09-19T18:52:57.734|INFO|AGT|container.containerdConnect: containerd connected - endpoint=/run/containerd/containerd.sock version={Version:v1.6.6-k3s1 Revision:}
2022-09-19T18:52:58.126|ERRO|AGT|orchestration.getVersion: - code=401 tag=k8s
2022-09-19T18:52:58.148|ERRO|AGT|orchestration.getVersion: - code=401 tag=oc
2022-09-19T18:52:58.157|ERRO|AGT|orchestration.getVersion: - code=403 tag=oc
2022-09-19T18:52:58.16 |INFO|AGT|workerlet.NewWalkerTask: - showDebug=false
2022-09-19T18:52:58.161|INFO|AGT|main.main: Container socket connected - endpoint= runtime=containerd
2022-09-19T18:52:58.162|INFO|AGT|main.main: - k8s=1.24.4+k3s1 oc=
2022-09-19T18:52:58.162|INFO|AGT|main.main: PROC: - shield=true
2022-09-19T18:52:58.179|ERRO|AGT|system.(*SystemTools).NsRunBinary: - error=exit status 255 msg=
2022-09-19T18:52:58.18 |ERRO|AGT|main.getHostAddrs: Error getting host IP - error=exit status 255
2022-09-19T18:52:58.18 |INFO|AGT|main.parseHostAddrs: - maxMTU=0
2022-09-19T18:52:58.277|ERRO|AGT|container.(*containerdDriver).GetContainer: Failed to get container image config - error=content digest sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4: not found id=eff4af6d37fd4fba2c37c838de8aac8e1beddcd7fe75343468e70607031b7a0b
2022-09-19T18:52:58.279|INFO|AGT|main.main: - hostIPs={}
2022-09-19T18:52:58.28 |INFO|AGT|main.main: - host={ID:k3d-Test-Wilco-agent-1: Name:k3d-Test-Wilco-agent-1 Runtime:containerd Platform:Kubernetes Flavor: Network:Default RuntimeVer:v1.6.6-k3s1 RuntimeAPIVer:v1.6.6-k3s1 OS:K3s dev Kernel:5.10.104-linuxkit CPUs:5 Memory:8232370176 Ifaces:map[] TunnelIP:[] CapDockerBench:false CapKubeBench:true StorageDriver:overlayfs CgroupVersion:2}
2022-09-19T18:52:58.28 |INFO|AGT|main.main: - agent={CLUSDevice:{ID:e62cf8e783c188001a5458df1352fc1b5fb168815eb1663f021452acbe0da972 Name:k8s_neuvector-enforcer-pod_neuvector-enforcer-pod-z5xcq_neuvector_c9e201e4-b4da-45b1-8dd1-2e30b17d54ac_2 SelfHostname: HostName:k3d-Test-Wilco-agent-1 HostID:k3d-Test-Wilco-agent-1: Domain:neuvector NetworkMode:/proc/23276/ns/net PidMode:host Ver:v5.0.2 Labels:map[io.cri-containerd.image:managed io.cri-containerd.kind:container io.kubernetes.container.name:neuvector-enforcer-pod io.kubernetes.pod.name:neuvector-enforcer-pod-z5xcq io.kubernetes.pod.namespace:neuvector io.kubernetes.pod.uid:c9e201e4-b4da-45b1-8dd1-2e30b17d54ac name:enforcer neuvector.image:neuvector/enforcer neuvector.rev:a552e5e neuvector.role:enforcer release:5.0.2 vendor:NeuVector Inc. version:5.0.2] CreatedAt:2022-09-19 18:52:56.937028472 +0000 UTC StartedAt:2022-09-19 18:52:56.937028472 +0000 UTC JoinedAt:0001-01-01 00:00:00 +0000 UTC MemoryLimit:0 CPUs: ClusterIP: RPCServerPort:0 Pid:25242 Ifaces:map[eth0:[{IPNet:{IP:192.168.181.217 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]]}}
2022-09-19T18:52:58.292|INFO|AGT|main.main: - jumboframe=false pipeType=no_tc
2022-09-19T18:52:58.293|INFO|AGT|cluster.FillClusterAddrs: - advertise=192.168.181.217 join=neuvector-svc-controller.neuvector
2022-09-19T18:52:58.311|INFO|AGT|cluster.(*consulMethod).Start: - config=&{ID:e62cf8e783c188001a5458df1352fc1b5fb168815eb1663f021452acbe0da972 Server:false Debug:false Ifaces:map[eth0:[{IPNet:{IP:192.168.181.217 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]] JoinAddr:neuvector-svc-controller.neuvector joinAddrList:[192.168.181.219 192.168.249.84 192.168.102.21] BindAddr:192.168.181.217 AdvertiseAddr:192.168.181.217 DataCenter:neuvector RPCPort:0 LANPort:0 WANPort:0 EnableDebug:false} recover=false
2022-09-19T18:52:58.313|INFO|AGT|cluster.(*consulMethod).Start: - node-id=6fa06cea-8c38-97be-277d-8a2d3ff3f27e
2022-09-19T18:52:58.315|INFO|AGT|cluster.(*consulMethod).Start: Consul start - args=[agent -datacenter neuvector -data-dir /tmp/neuvector -config-file /tmp/consul.json -bind 192.168.181.217 -advertise 192.168.181.217 -node 192.168.181.217 -node-id 6fa06cea-8c38-97be-277d-8a2d3ff3f27e -raft-protocol 3 -retry-join 192.168.181.219 -retry-join 192.168.249.84 -retry-join 192.168.102.21]
==> Starting Consul agent...
         Version: '1.11.3'
         Node ID: '6fa06cea-8c38-97be-277d-8a2d3ff3f27e'
       Node name: '192.168.181.217'
      Datacenter: 'neuvector' (Segment: '')
          Server: false (Bootstrap: false)
     Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: -1)
    Cluster Addr: 192.168.181.217 (LAN: 18301, WAN: -1)
         Encrypt: Gossip: true, TLS-Outgoing: true, TLS-Incoming: true, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
2022-09-19T18:52:59.577Z [WARN] agent: Node name "192.168.181.217" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-19T18:52:59.631Z [WARN] agent.auto_config: Node name "192.168.181.217" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-19T18:52:59.657Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.217 192.168.181.217
2022-09-19T18:52:59.660Z [INFO] agent.router: Initializing LAN area manager
2022-09-19T18:52:59.667Z [INFO] agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
2022-09-19T18:52:59.668Z [WARN] agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. Set `telemetry { disable_compat_1.9 = true }` to disable them.
2022-09-19T18:52:59.669Z [INFO] agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
2022-09-19T18:52:59.669Z [INFO] agent: Joining cluster...: cluster=LAN
2022-09-19T18:52:59.669Z [INFO] agent: (LAN) joining: lan_addresses=[192.168.181.219, 192.168.249.84, 192.168.102.21]
2022-09-19T18:52:59.680Z [INFO] agent: started state syncer
2022-09-19T18:52:59.680Z [INFO] agent: Consul agent running!
2022-09-19T18:52:59.683Z [WARN] agent.router.manager: No servers available
2022-09-19T18:52:59.686Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
2022-09-19T18:52:59.688Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.84 192.168.249.84
2022-09-19T18:52:59.690Z [INFO] agent.client: adding server: server="192.168.249.84 (Addr: tcp/192.168.249.84:18300) (DC: neuvector)"
2022-09-19T18:52:59.719Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: 192.168.102.21 192.168.102.21
2022-09-19T18:52:59.720Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.219 192.168.181.219
2022-09-19T18:52:59.720Z [WARN] agent.client.memberlist.lan: memberlist: Refuting a dead message (from: 192.168.181.217)
2022-09-19T18:52:59.721Z [INFO] agent.client: adding server: server="192.168.102.21 (Addr: tcp/192.168.102.21:18300) (DC: neuvector)"
2022-09-19T18:52:59.721Z [INFO] agent.client: adding server: server="192.168.181.219 (Addr: tcp/192.168.181.219:18300) (DC: neuvector)"
2022-09-19T18:52:59.771Z [INFO] agent: (LAN) joined: number_of_nodes=3
2022-09-19T18:52:59.771Z [INFO] agent: Join cluster completed. Synced with initial agents: cluster=LAN num_agents=3
2022-09-19T18:53:00.895Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.83 192.168.249.83
2022-09-19T18:53:02.076Z [INFO] agent: Synced node info
```
Any idea why this happens and how to fix it? Thanks.
Anybody?