iptables fails to NAT udp responses #11998
Comments
I can confirm this is a problem.
@icecrime Could we prioritize this as well? Since we know a workaround, we're not directly affected anymore, but IMO it's quite embarrassing that people all over the web are running into this issue.
@discordianfish we will take a look at this.
Hello! Also very interested in having this problem fixed. Steps to reproduce (I'm using DNS as an example, but the same problem applies to all routed traffic): Dockerfile:
Instance 1:
Instance 2 (trying to connect to DNS)
reply from unexpected source: 172.17.42.1#53 (docker0 interface), expected 192.168.59.103#53 (eth0 interface)
From outside of the Docker host everything works fine:
As a workaround, I currently bind to the eth0 IP instead of 0.0.0.0, but this solution is not flexible, and I would like to see it fixed. Thanks.
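The "reply from unexpected source" error above is the generic UDP source-address problem, not specific to Docker: a server bound to a wildcard address replies with a source IP chosen by the kernel's routing decision, and resolvers drop replies whose source doesn't match the queried address. A minimal Python sketch of that check (port 15353 and the payload are arbitrary choices for this illustration):

```python
import socket
import threading

def serve_one(sock):
    # Echo one datagram back to whoever sent it; the reply's source IP
    # is the address the socket is bound to.
    data, peer = sock.recvfrom(512)
    sock.sendto(data, peer)

# Explicit bind: the reply source is predictable. Bound to "0.0.0.0",
# the kernel would pick the source IP per route, which is what produces
# "reply from unexpected source" in multi-interface setups.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 15353))
t = threading.Thread(target=serve_one, args=(srv,))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2)
cli.sendto(b"ping", ("127.0.0.1", 15353))
data, src = cli.recvfrom(512)
t.join()
srv.close()
cli.close()

# The check a resolver performs: reply source must equal the queried address.
matches = (src[0] == "127.0.0.1")
print("reply source matches queried address:", matches)
```

Binding the container's published port to a specific host address works for the same reason: it pins the source IP of the NATed reply.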
ping @mavenugo interested to hear if there's an update on this? #11998 (comment)
Related: #8357
I've faced this issue while trying to use two containers on the same host machine. I wanted to use a DNS server from a VPN server via the host machine's IP address. Hope this will be fixed soon.
Guys, I've been working with Docker for almost a year, did a lot of tasks, and when I finally thought of bringing Docker to a prod environment, I hit a major block: UDP and TCP traffic is not reaching my Docker containers. This issue is a bit strange: in my test environment I don't see any packets getting dropped, but on the production servers where I'm trying to run Docker, it is dropping UDP as well as TCP packets. The Docker version (1.8) and the host OS (OEL 7) are the same in both environments. All the packets reach my host server but they don't enter my containers, so the network isn't dropping the packets. Does it have anything to do with the class of IP? My hosts in my lab setup have an IP range of 172.x.x.x while the production setup has 10.10.x.x. Does the netmask have anything to do with this problem? I'm running out of options to check; any pointers to debug would be helpful here. Thanks
@ashish235 can you ask your question on https://forums.docker.com or the #docker IRC channel? The issue tracker is not a general support forum.
Any progress on this issue?
ping @mavenugo PTAL
As a quick workaround, one may make containers query DNS servers via TCP. Systemd drop-in (be sure to check the current
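For glibc-based containers, forcing DNS queries over TCP can also be done inside the container via resolv.conf; a sketch (the nameserver address is a placeholder, and musl-based images such as Alpine ignore `use-vc`):

```
# /etc/resolv.conf inside the container
nameserver 192.0.2.53
options use-vc
```

The `use-vc` option tells the glibc stub resolver to use TCP for lookups, which sidesteps the UDP conntrack issues discussed in this thread at the cost of extra latency per query.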
Greetings! I was directed here from the issue mentioned right above this post and thought I could provide some analysis. I'm just a guy who knows some networking and likes writing long posts, so here we go.

To expand on the issue's original errors of getting the internal Docker IP instead of the external Docker host IP while querying DNS: it sounds a lot like hairpin NAT, aka NAT reflection. PFSense has a good explanation here. That said, trying to reproduce this on 1.13 RC3 I get timeouts from the services I try to access using the reproduction steps. This may be because of this Docker 1.7.0 changelog note that sounds like a change in default networking: "The userland proxy can be disabled in favor of hairpin NAT using the daemon's --userland-proxy=false flag". I'm reading that as hairpin no longer being the default, like PFSense's default.

I'd like to reiterate what the PFSense explanation of hairpin, and argument against it, says: "Split DNS is usually the better way if it is possible on a network because it allows for retaining of the original source IP and avoids unnecessarily looping internal traffic through the firewall." To me, the built-in equivalent of split DNS is Docker container linking, which provides a DNS name that resolves to the internal destination IP rather than the external destination IP. I've had no problems accessing TCP+UDP services using linked container hostnames as the address for connecting.
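The `--userland-proxy=false` flag quoted from the changelog above can also be set persistently in the daemon configuration; a sketch, assuming the default config path `/etc/docker/daemon.json` on a Linux install:

```json
{
  "userland-proxy": false
}
```

After editing the file, the daemon must be restarted (e.g. `systemctl restart docker`) for the setting to take effect.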
Ancient issue, but seems like this may be related to https://www.weave.works/blog/racy-conntrack-and-dns-lookup-timeouts Interesting read nonetheless.
Yes, quite possible. Seems like it's still an open issue. See also:
There is also https://blog.quentin-machu.fr/2018/06/24/5-15s-dns-lookups-on-kubernetes/ but the proposed workaround just made it worse in my case.
This still seems to be an issue? |
Hi,
when running a DNS server in a container and exposing the port without specifying the IP to bind (-p 53:53/udp), the answers don't get translated. I don't have an easy way to reproduce right now, but it seems like this is a common and known problem:
Seems like the workaround is to provide an IP; I'll try this and also try to provide a simple way to reproduce it, unless you're already aware of this.
This happened to me on docker 1.5.0
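The workaround described above, publishing the port on a specific host address rather than 0.0.0.0, can be sketched as follows (the IP and image name are placeholders for this example):

```shell
# Publish UDP port 53 only on the host's eth0 address instead of 0.0.0.0,
# so UDP replies are NATed from the address clients actually queried.
docker run -d --name dns -p 192.168.59.103:53:53/udp my-dns-image
```

This pins both the listening address and the source address of translated replies, avoiding the "reply from unexpected source" rejection on the client side.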