We have a use case where we run pods on EKS in host-network mode and attach a second ENI to the node. The second ENI is tagged with node.k8s.amazonaws.com/no_manage: true, and we expected the CNI to leave it completely alone. However, the iptables rules set up by the CNI force all traffic going out via that ENI to be SNATed, and the source IP gets changed to the primary node IP.
Is that the intended behaviour, and if so, is there a way to disable it? We do not have a NAT gateway running, so setting AWS_VPC_K8S_CNI_EXTERNALSNAT=true is not an option for us, as it breaks all other use cases in the cluster.
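For anyone hitting the same thing, the rules in question can be inspected on the node itself. This is a sketch; the AWS-SNAT chain names are what the aws-vpc-cni plugin typically installs, and exact names may vary between CNI versions:

```shell
# On the affected node: dump the NAT table and look for the
# AWS-SNAT-CHAIN-* chains installed by the aws-vpc-cni plugin.
sudo iptables -t nat -S | grep -i 'AWS-SNAT'

# Show packet/byte counters on POSTROUTING to confirm that traffic
# leaving the second ENI is actually matching the SNAT rule.
sudo iptables -t nat -L POSTROUTING -n -v
```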
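If it helps narrow things down: the CNI's AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS setting (which, as we understand it, excludes traffic to the listed destination CIDRs from SNAT) might be a partial workaround when the destinations reached via the second ENI fall in known ranges. A sketch, where 10.100.0.0/16 is a placeholder for those ranges:

```shell
# Sketch only: exclude traffic to known destination CIDRs from the
# CNI's SNAT rules. 10.100.0.0/16 is a placeholder, not our real range.
kubectl set env daemonset/aws-node -n kube-system \
  AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS=10.100.0.0/16
```

This only helps when the destination CIDRs are known in advance, though; it does not disable SNAT for the ENI as a whole.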
Thanks in advance, any help is highly appreciated!
Environment:
- Kubernetes version (use kubectl version): 1.28
- CNI version: v1.15.3-eksbuild.1
- OS (e.g. cat /etc/os-release): amazon-eks-node-1.28-v20240514 AMI