
iptables fails to NAT udp responses #11998

Open
discordianfish opened this issue Apr 1, 2015 · 17 comments
Assignees
Labels
area/networking kind/bug

Comments

@discordianfish
Contributor

Hi,

When running a DNS server in a container and exposing the port without specifying the IP to bind (-p 53:53/udp), the answers don't get translated. I don't have an easy way to reproduce right now, but it seems like this is a common and known problem.

Seems like the workaround is to provide an IP; I'll try this and also try to provide a simple way to reproduce it, unless you're already aware of this.

This happened to me on docker 1.5.0
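
For reference, a minimal sketch of that workaround (192.168.1.10 is a placeholder for the host's external IP, and my-dns-image for the image): publish the ports on an explicit host address instead of the default 0.0.0.0:

# Publish the DNS ports on an explicit host address so replies
# leave with the source IP the client expects
docker run -d \
  -p 192.168.1.10:53:53/udp \
  -p 192.168.1.10:53:53/tcp \
  my-dns-image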

@unclejack
Contributor

I can confirm this is a problem.

@discordianfish
Contributor Author

@icecrime Could we prioritize this as well? Since we know a workaround, we're not directly affected anymore, but IMO it's quite embarrassing that people all over the web are running into this issue.

@unclejack unclejack changed the title iptables failes to NAT udp responses iptables fails to NAT udp responses Apr 6, 2015
@thaJeztah thaJeztah added /system/networking kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed. labels Apr 7, 2015
@mavenugo
Contributor

mavenugo commented Jun 4, 2015

@discordianfish we will take a look at this.

@VitalyFedyunin

Hello!

I'm also very interested in getting this problem fixed.

Steps to reproduce (I'm using DNS as an example, but the same problem applies to all routing):

Dockerfile:

# Minimal dnsmasq image used to reproduce the issue
FROM ubuntu:14.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -q -y dnsmasq dnsutils

# Forward all queries to Google's public resolvers
RUN echo 'resolv-file=/etc/resolv.dnsmasq.conf' >> /etc/dnsmasq.conf
RUN echo 'conf-dir=/etc/dnsmasq.d' >> /etc/dnsmasq.conf
RUN echo 'nameserver 8.8.8.8' >> /etc/resolv.dnsmasq.conf
RUN echo 'nameserver 8.8.4.4' >> /etc/resolv.dnsmasq.conf

EXPOSE 53

Build the image as dns_test (docker build -t dns_test .), then start instance 1:

docker run -p 53:53/udp -p 53:53/tcp -i dns_test dnsmasq -d

Instance 2 (trying to query the DNS server via the host's external IP):

docker run -i dns_test dig @<HOST_EXTERNAL_IP == 192.168.59.103> google.com

;; reply from unexpected source: 172.17.42.1#53, expected 192.168.59.103#53
;; reply from unexpected source: 172.17.42.1#53, expected 192.168.59.103#53
;; reply from unexpected source: 172.17.42.1#53, expected 192.168.59.103#53

; <<>> DiG 9.9.5-3ubuntu0.3-Ubuntu <<>> @192.168.59.103 google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

In other words, the reply comes back from 172.17.42.1#53 (the docker0 interface) instead of the 192.168.59.103#53 (eth0 interface) address the query was sent to.
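
For anyone debugging this on the host, a rough sketch of how to inspect the NAT state (exact output varies per setup; conntrack requires the conntrack-tools package):

sudo iptables -t nat -nvL DOCKER        # DNAT rules created for the published port
sudo iptables -t nat -nvL POSTROUTING   # MASQUERADE rules for outbound traffic
sudo conntrack -L -p udp --dport 53     # per-flow NAT state for the DNS queries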

From outside of the Docker host everything works fine:

dig @192.168.59.103 google.com

; <<>> DiG 9.8.3-P1 <<>> @192.168.59.103 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26532
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;google.com.            IN  A

;; ANSWER SECTION:
google.com.     191 IN  A   216.58.219.206

As a workaround, I currently bind to the eth0 IP instead of 0.0.0.0, but this solution is not flexible, and I would like to see the underlying issue fixed.
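
A slightly more flexible take on that workaround (just a sketch; it assumes the public interface is eth0 and reuses the dns_test image from above) is to look the address up when starting the container:

# Resolve eth0's current IPv4 address and bind the published ports to it
HOST_IP=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)
docker run -p "${HOST_IP}:53:53/udp" -p "${HOST_IP}:53:53/tcp" -i dns_test dnsmasq -d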

Thanks.

@thaJeztah
Member

ping @mavenugo, interested to hear if there's an update on this? #11998 (comment)

@lylepratt

Related: #8357

@Kolyunya

I've faced this issue while trying to use two containers on the same host machine. I wanted to reach a DNS server from a VPN-server container via the host machine's IP address. Hope this will be fixed soon.

@ashish235

Guys,

I've been working with Docker for almost a year and did a lot of tasks with it, but when I finally thought of bringing Docker to a prod environment, I hit a major block: UDP and TCP traffic is not reaching my Docker containers.

This issue is a bit strange: in my test environment I don't see any packets getting dropped, but on the production servers where I'm trying to run Docker, UDP as well as TCP packets are dropped. The Docker version (1.8) and the host OS (OEL 7) are the same in both environments. All the packets reach my host server but they don't enter my containers, so it's not the network dropping the packets.

Does it have anything to do with the class of IP? The hosts in my lab setup have an IP range of 172.x.x.x while the production setup has 10.10.x.x. Does the netmask have anything to do with this problem?

I'm running out of options to check; any pointers to debug would be helpful here.

Thanks
Ashish

@thaJeztah
Member

@ashish235 can you ask your question on https://forums.docker.com or the #docker IRC channel? The issue tracker is not a general support forum.

@dovahcrow

Any progress on this issue?

@GordonTheTurtle

USER POLL

The best way to get notified of updates is to use the Subscribe button on this page.

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@artemkaint

@thaJeztah
Member

ping @mavenugo PTAL

@KostyaEsmukov

As a quick workaround, one may make containers query DNS servers via TCP.
In order to do this, the docker daemon should be run with the --dns-opt="use-vc" option (see man 5 resolv.conf).

Systemd drop-in (be sure to check the current ExecStart value first with systemctl cat docker.service | grep ExecStart):

mkdir -p /lib/systemd/system/docker.service.d

cat > /lib/systemd/system/docker.service.d/00-dns-tcp.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --dns-opt="use-vc"
EOF

systemctl daemon-reload
systemctl restart docker.service
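
To check that the option actually reaches newly started containers (a quick sanity check, assuming an alpine image is available locally or can be pulled):

docker run --rm alpine cat /etc/resolv.conf
# the output should now contain a line similar to: options use-vc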

@diginc

diginc commented Dec 21, 2016

Greetings! I was directed here from the mentioned issue right above this post and thought I could provide some analysis. I'm just a guy who knows some networking and likes writing long posts, so here we go:

To expand on the issue's original errors of getting the internal Docker IP instead of the external Docker host IP while querying DNS: it sounds a lot like hairpin NAT, aka NAT reflection. PFSense has a good explanation here.

That said, trying to reproduce this on 1.13 RC3 I get timeouts from the services I try to access using the reproduce steps. This may be because of this Docker 1.7.0 changelog note that sounds like a change in default networking: "The userland proxy can be disabled in favor of hairpin NAT using the daemon's --userland-proxy=false flag" - I'm reading that as hairpin no longer being the default, like PFSense's default.
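
For anyone who wants to try that flag without touching unit files, a sketch using the daemon configuration file (note this overwrites any existing /etc/docker/daemon.json, so merge by hand if you already have one):

echo '{ "userland-proxy": false }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker.service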

I'd like to reiterate what the PFSense explanation of hairpin NAT, and the argument against it, says: "Split DNS is usually the better way if it is possible on a network because it allows for retaining of the original source IP and avoids unnecessarily looping internal traffic through the firewall."

To me the built-in equivalent of split DNS is Docker container linking, which provides a DNS name that resolves to the internal destination IP rather than the external one. I've had no problems accessing TCP and UDP services using linked container hostnames as the address for connecting.
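
As an illustration of that approach (a sketch; dns_test is the image from the reproduce steps above, the network and container names are arbitrary), putting both containers on a user-defined network gives the same name-based resolution as linking, so the client talks to the server's internal IP directly and the hairpin path is never taken:

docker network create dnsnet
docker run -d --network dnsnet --name dns dns_test dnsmasq -d
docker run --rm --network dnsnet dns_test dig @dns google.com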

@cpuguy83
Member

Ancient issue, but seems like this may be related to https://www.weave.works/blog/racy-conntrack-and-dns-lookup-timeouts

Interesting read nonetheless.

@discordianfish
Contributor Author

Yes, quite possible. Seems like it's still an open issue. See also:

There is also https://blog.quentin-machu.fr/2018/06/24/5-15s-dns-lookups-on-kubernetes/ but the proposed workaround just made it worse in my case.

@max06

max06 commented Mar 22, 2022

;; reply from unexpected source: 172.17.0.1#5053, expected 192.168.27.10#5053 

This still seems to be an issue?
