DO Proxy Protocol broken header #3996

Closed · dottodot opened this issue Apr 11, 2019 · 44 comments · Fixed by #5474
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@dottodot

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version:
0.24.0

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Digital Ocean

What happened:
Digital Ocean now allows for the use of Proxy Protocol:
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#proxy-protocol
So I've added the annotation to my service:

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

and updated my config as follows:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "true"
  enable-brotli: "true"
  enable-vts-status: "true"

However, once I've applied these changes I get lots of errors such as the following:

6���Zك7�g̮["\�/�+�0�,��'g�(k��̨̩̪������������������$j�#@�
�98" while reading PROXY protocol, client: 10.244.35.0, server: 0.0.0.0:443
2019/04/11 13:02:57 [error] 265#265: *4443 broken header: "����p�����ўL��k+
rbO-
/�Ç���y�8\�/�+�0�,��'g�(k��̨̩̪������������������$j�#@�
�98" while reading PROXY protocol, client: 10.244.41.0, server: 0.0.0.0:443
2019/04/11 13:02:57 [error] 265#265: 4444 broken header: "���5�Kk��4 ��b�pxLJw�]��G�V��� �
\�/�+�0�,��'g�(k��̨̩̪������������������$j�#@�
�98" while reading PROXY protocol, client: 10.244.41.0, server: 0.0.0.0:443

Digital Ocean's response to these errors was:

This type of response is typically caused by Nginx not properly accepting the PROXY protocol. You should be able to simply append proxy_protocol to your listen directive in your server definition. More information on this can be seen in Nginx's documentation available here:

https://docs.nginx.com/nginx/admin-guide/load-balancer/using-proxy-protocol/
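
For reference, what DO is describing corresponds to an nginx server block roughly like the sketch below. With ingress-nginx you don't edit nginx.conf by hand; the controller is expected to render the equivalent when use-proxy-protocol is enabled in the ConfigMap (server name and CIDR here are placeholders):

server {
    # accept the PROXY protocol line the load balancer prepends to every connection
    listen 443 ssl proxy_protocol;
    server_name example.com;
    # take the real client address from the PROXY header instead of the TCP peer
    real_ip_header proxy_protocol;
    set_real_ip_from 10.244.0.0/16;  # trust the node/pod network (placeholder CIDR)
    # ... certificates, locations, etc.
}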

I can't see what I'm missing.

What you expected to happen:
No errors

@aledbf
Member

aledbf commented Apr 11, 2019

However once I've applied these changes I get lots of errors such as the following
6���Zك7�g̮["\�/�+�0�,��'g�(k��̨̩̪������������������$j�#@�

It seems HTTPS traffic is being sent to the HTTP port; please check the port mapping in the ingress-nginx service and your DO console.

@dottodot
Author

@aledbf my service ports are

  ports:
  - name: http
    nodePort: 32342
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31346
    port: 443
    protocol: TCP
    targetPort: https

and the port forwarding on my load balancer is set to
TCP on port 80 > TCP on port 32342
TCP on port 443 > TCP on port 31346

@Routhinator

Can confirm this issue, same configuration here across the board.

@tlaverdure

Looking for a solution to this as well. All public requests work, but internal traffic to a host of the ingress fails.

2019/05/23 19:59:51 [error] 411#411: *870261 broken header: "��*EpS�M;I��K��WT}�:^ͼ�0���0�,�(�$��
����kjih9876�����2�.�*�&���=5" while reading PROXY protocol, client: 10.244.2.1, server: 0.0.0.0:443

@dperetti

dperetti commented May 23, 2019

It works for me using the helm chart with the following values:

## nginx configuration
## https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml

## https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol
## https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
controller:
  config:
    use-proxy-protocol: "true"

  service:
    externalTrafficPolicy: "Local"
    annotations:
      # https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/
      # https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

@tlaverdure

I've got a similar setup, which works if I access the host publicly. However, accessing the host from within the cluster seems to fail (i.e. a server-side request from one pod to another pod using the host).

Ingress

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  tls:
  - hosts:
    - 'example.com'
    - '*.example.com'
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: example-app
          servicePort: 80
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: example-backend
          servicePort: 80
...

Config

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"

Curl Example Error

$ curl -v https://api.example.com
* Rebuilt URL to: https://api.example.com/
*   Trying (123.456.789.000...
* TCP_NODELAY set
* Connected to api.example.com (123.456.789.000) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to api.example.com:443 
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to api.example.com:443 

Log from failed request

[error] 661#661: *981009 broken header: "�/�9��ނ���R�6ަ�@%Qe�lG�3.���0�,�(�$��
����kjih9876�����2�.�*�&���=5" while reading PROXY protocol, client: 10.244.3.1, server: 0.0.0.0:443

@dperetti

I don't think service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true" should be set in the Ingress annotations; it probably has no effect there.
My understanding is that it must be set on the nginx-ingress service. I think it simply tells DO to activate the "Use Proxy Protocol" setting when the load balancer is created.

@aledbf
Member

aledbf commented May 24, 2019

However, accessing the host from within the cluster seems to fail (i.e. a server side request from one pod to another pod using the host)

@tlaverdure that's because you are not specifying the --haproxy-protocol flag in the curl command. If you enable proxy protocol in the ingress controller, any client that connects to it directly needs to send the PROXY header too.
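
For example, letting curl emit the PROXY protocol v1 header itself (the hostname is a placeholder; the flag requires curl 7.60 or newer):

# curl prepends "PROXY TCP4 <src> <dst> <sport> <dport>" before the TLS handshake
curl -v --haproxy-protocol https://api.example.com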

@aledbf
Member

aledbf commented May 24, 2019

I don't think service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true" should be set in the Ingress annotation. It has probably no effect here.

That is correct.

@dottodot
Author

@dperetti I have service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true" on the nginx-ingress service and get the same issue as @tlaverdure

@aledbf I'm not using the curl command but I get the same errors.

@dperetti

I also have externalTrafficPolicy: Local on the nginx-ingress-controller service and, of course, use-proxy-protocol: "true" in the ConfigMap.
My Kubernetes version is 1.14.1-do.2.

@tlaverdure

@dperetti thanks for the tip. I think I added it initially when testing and forgot to remove it after I added that annotation to the ingress-nginx service.

@aledbf I'm experiencing this with any type of server-side HTTP request. Curl was used to verify the issue, but any server-side scripting language that makes an HTTP request to the host (e.g. Node) is failing.

@dottodot
Author

I also have externalTrafficPolicy: Local, so I don't think that's related.

@dperetti

Stupid check, but if you go to the load balancer's settings in the DO admin panel, it's enabled, right?
Because if it's not while use-proxy-protocol is "true", you end up with the same kind of encoding mismatch.
[screenshot: DO load balancer settings with "Proxy Protocol" enabled]
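
If you prefer checking from the CLI instead of the admin panel, something like the following should work, assuming doctl is configured (the enable_proxy_protocol field name is taken from the DO API and is an assumption here):

# list load balancers as JSON and look for the proxy protocol flag
doctl compute load-balancer list --output json | grep -i proxy_protocol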

@tlaverdure

Yes, Proxy Protocol is enabled.

@tlaverdure

tlaverdure commented May 24, 2019

Just tested setting use-proxy-protocol: "false" in the ConfigMap. This kills my ability to reach the host externally but allows me to access the host within the cluster.

@dottodot
Author

OK, I've had some advice back from Digital Ocean:

Current options for a workaround are to have pods access other DOKS services through their resolvable service names or by using the desired service's clusterIP.

Using the service hostname as described below or the service cluster IP could be used for traffic originating inside the cluster.
kubectl get svc

Will return a list of your services and their clusterIP. You can use this IP to reach your service within the cluster. Note this IP will only work from within the cluster.

You can find documentation for accessing services properly in kubernetes here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services

The usage for accessing a service is: my-svc.my-namespace.svc.cluster.local:443

Using either of these methods will mean that the traffic no longer needs to go outside the cluster and interact with the proxy, and can just get direct access to the service.

The only problem is I'm not entirely sure how to find which pods have the issue and need updating.
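
As a concrete sketch of DO's suggestion (namespace, service name, and path are placeholders): an in-cluster client calls the backing service directly over cluster DNS, so the request never reaches the load balancer or the PROXY-protocol listener at all:

# list services and their ClusterIPs
kubectl get svc -n my-namespace
# call the target service directly over cluster DNS from inside any pod
curl http://example-backend.my-namespace.svc.cluster.local/healthz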

@dottodot
Author

Also, when I turn on proxy protocol the logs suggest that not all requests have a broken header, so how do I identify what's causing the broken headers and fix it?

@MichaelJCole

MichaelJCole commented Jun 15, 2019

I ran into this, and at one point I deleted the ingress service and recreated it and it worked. I got the broken headers issue when I had the nginx ConfigMap set but not the annotations on the ingress service that creates the DO LB. Manually configuring "Proxy Protocol" on the LB via the Web UI didn't work for me.

Anyways, here is a config that worked for me:

mandatory.yaml

# Please note this file has been customized from the original at: 
# https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml  

#... stuff in the mandatory.yaml file that doesn't need to be customized.

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# https://www.digitalocean.com/community/questions/how-to-set-up-nginx-ingress-for-load-balancers-with-proxy-protocol-support?answer=50244
data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  use-proxy-protocol: "true"

# ... more stuff in the mandatory.yaml file that doesn't need to be customized.

cloud-generic.yaml

# Please note this file has been customized from the original at: 
# https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml 

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: https

ingress.yaml (I wanted a whitelist for only CloudFlare)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: allup
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "2400:cb00::/32, 2606:4700::/32, 2803:f800::/32, 2405:b500::/32, 2405:8100::/32, 2a06:98c0::/29, 2c0f:f248::/32, 173.245.48.0/20, 103.21.244.0/22, 103.22.200.0/22, 103.31.4.0/22, 141.101.64.0/18, 108.162.192.0/18, 190.93.240.0/20, 188.114.96.0/20, 197.234.240.0/22, 198.41.128.0/17, 162.158.0.0/15, 104.16.0.0/12, 172.64.0.0/13, 131.0.72.0/22"
spec:
  tls:
    - secretName: cloudflare-tls-cert
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app-www-service
              servicePort: http

@w1ndy

w1ndy commented Jul 9, 2019

Is it possible to allow nginx to listen on the same http/https port both with and without proxy protocol, like this setup?

@aledbf
Member

aledbf commented Jul 9, 2019

@w1ndy no. You need a custom template for that: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/

@lingxiankong

The same issue happens in a Kubernetes cluster on top of an OpenStack cloud (using openstack-cloud-controller-manager).

The Ingress service can be accessed from outside the cluster, but not from a cluster node or from inside a pod.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 4, 2019
@lingxiankong

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 4, 2019
@davecranwell-vocovo

davecranwell-vocovo commented Nov 22, 2019

I'm a bit unclear whether this issue is the same one we just encountered, but perhaps the following is helpful:

We have a microservice architecture where services talk to each other via the axios library, all inside the same cluster. What we'd misconfigured was the URL by which the services talk to each other. We had one service talk to the other via the external DNS record by which the target service was known, e.g. foo.domain.com, causing traffic for it to go all the way out and back into the cluster again. When nginx tried to handle the request, the header looked broken because the request wasn't preceded by the instruction PROXY TCP4 10.8.0.18 [TARGET_IP] 55966 443 (which is what you get when you curl --haproxy-protocol, and is what happens to all inbound traffic handled by the ingress controller when you enable the "Proxy Protocol" setting).

By changing the URL of the target service to the internal DNS record by which it was known, e.g. http://[service].[namespace].svc.cluster.local, traffic was sent directly to the target, not back through the ingress controller.
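
To make the failure mode concrete: with "Proxy Protocol" enabled, every connection that really passes through the load balancer starts with one plain-text PROXY protocol v1 line before the HTTP or TLS bytes, roughly like this (addresses and ports are placeholders):

PROXY TCP4 203.0.113.7 10.244.0.12 55966 443
GET / HTTP/1.1
Host: foo.domain.com

A request that reaches the controller without that first line (for example in-cluster traffic that never actually traverses the load balancer) makes nginx try to parse the raw TLS ClientHello or HTTP request as a PROXY header, which is exactly the garbled "broken header" output shown in the logs above.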

@peteychuk

Temporarily solved it by using Cloudflare proxy mode for the subdomains.
In this case all traffic goes via the Cloudflare proxy and behaves the same way whether it originates inside the cluster or externally.

Looking forward to the resolution of this issue.

@jbanety

jbanety commented Dec 17, 2019

Hi,
A workaround is to add the service.beta.kubernetes.io/do-loadbalancer-hostname annotation to the service, with a hostname pointing to the load balancer IP.

See https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md#servicebetakubernetesiodo-loadbalancer-hostname

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "do-k8s.example.com"
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
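
Once this is applied, the service's load balancer status should report the hostname instead of the raw IP, which appears to be what stops kube-proxy from short-circuiting in-cluster traffic destined for the load balancer (see the DO CCM docs linked above). A quick check, assuming the service name and namespace from the manifest above:

kubectl -n ingress-nginx get svc ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'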

@dottodot
Author

Thank you, service.beta.kubernetes.io/do-loadbalancer-hostname seems to work. No more broken headers when using proxy protocol.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 19, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 18, 2020
@Oznup

Oznup commented Apr 26, 2020

Hello,
I have the same issue, but my cluster is self-hosted and my load balancer is MetalLB.
So the two annotations added to the service have no effect.
Any idea how to solve it?

@jbanety

jbanety commented Apr 27, 2020

Hi @Oznup,
Does your LoadBalancer have an External-IP?
On bare metal, my only workaround is to leave my LoadBalancers showing the "pending" message for working ingresses.
Maybe this helps: kubernetes/kubernetes#66607 (comment)

@buzypi

buzypi commented May 9, 2020

For users of DigitalOcean's managed Kubernetes offering (DOKS) and possibly others: you are most likely running into a specific bypassing behavior of kube-proxy that causes requests to never leave the cluster but go straight to the internal pod. For receiving pods that expect a certain protocol to be implemented by the load balancer (like proxy protocol or TLS termination), this breaks workflows.

Pointers to the related upstream issue and workarounds are described in our CCM README. The tracking issue to address the problem long-term for DOKS is at digitalocean/DOKS#8.

This is precisely the problem we are facing and the workaround was to add the hostname as described. I can confirm that this also works with wildcard DNS matches.

@bukowa

bukowa commented Aug 4, 2020

Workaround does not work for me.

@maitrungduc1410

maitrungduc1410 commented Feb 21, 2021

Adding service.beta.kubernetes.io/do-loadbalancer-hostname works for DO, but it's not clear what to do the first time you read the DO docs (from here and here).

Basically it involves the following steps:

  1. First get the nginx-ingress service LB External-IP.
  2. Then annotate the nginx-ingress service with your domain name:
kubectl annotate service ingress-nginx-controller service.beta.kubernetes.io/do-loadbalancer-hostname=mydomain.com -n ingress-nginx --overwrite
  3. Then go to your domain provider and point your domain to that External-IP. Wait a few minutes for the change to propagate and try accessing the domain, making sure it reaches the ingress controller (it will usually show the ingress status page, a 404, ...).

After this you can create your Ingress resource and use cert-manager to issue an SSL certificate as usual, and when you get your ingress it should show an ADDRESS which is your domain. Hope this helps.
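
A quick way to verify that last step: once the annotation and DNS record are in place, the ingress should list the hostname in its ADDRESS column (names and output below are illustrative placeholders):

kubectl get ingress -n my-namespace
# NAME     CLASS   HOSTS          ADDRESS        PORTS     AGE
# my-app   nginx   mydomain.com   mydomain.com   80, 443   5m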

@luccas-freitas

Stupid check, but if you go to the load balancer's settings in the DO admin panel, it's enabled, right? Because if it's not while use-proxy-protocol is "true", you end up with the same kind of encoding mismatch.

This solved my problem

@13567436138

I face the same problem, how do I solve it?

@IakimLynnyk-TomTom

I have the same issue with this option while integrating with AKS services. Same issue, same error:
AKS v1.21.9
ingress-nginx v1.3.1 (v1.1.1 for the hook)
But somehow I got it working by using proxy-protocol: 'true' and it just works.

@goutham-sabapathy

Hello,

I am running EKS with Kubernetes nginx ingress on an NLB with SSL termination, but the proxy protocol annotations were not working.

controller:
  electionID: external-ingress-controller-leader
  ingressClassResource:
    name: external-nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/external-ingress-nginx"
  config: 
    use-proxy-protocol": "true" 
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=dev,owner=platform-engineering"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:<REDACTED>:certificate/<REDACTED>
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-<REDACTED>, subnet-<REDACTED>
      
      # below annotations were not working
      # service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      # service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "preserve_client_ip.enabled=false,proxy_protocol_v2.enabled=true"

But once the NLB and ingress objects are created, the services won't be reachable because proxy protocol on the NLB target groups was not turned on automatically, even with the provided annotations. I got the below error in the controller pod logs:

2022/11/03 20:50:23 [error] 32#32: *26209 broken header: "GET /index.html HTTP/1.1
Host: nginx.<REDACTED>
Connection: keep-alive
Cache-Control: max-age=0
sec-ch-ua: "Google Chrome";v="107", "Chromium";v="107", "Not=A?Brand";v="24"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "macOS"
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: cross-site
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,ta;q=0.8

" while reading PROXY protocol, client: <REDACTED>, server: 0.0.0.0:80

Once I turned them on manually through the console (NLB -> Listeners -> Target Group attributes), it works (I get a 200 response in the same controller logs).

[screenshot: NLB target group attributes with proxy protocol v2 enabled]
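
The same console change can presumably also be scripted with the AWS CLI; a sketch, assuming you know the target group ARN (all identifiers below are placeholders):

aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-west-2:<ACCOUNT_ID>:targetgroup/<NAME>/<ID> \
  --attributes Key=proxy_protocol_v2.enabled,Value=true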

But neither of the annotations in the Helm values seems to work.

Can anyone help with this?

@khemrajd

khemrajd commented Jun 5, 2023

However, accessing the host from within the cluster seems to fail (i.e. a server side request from one pod to another pod using the host)

@tlaverdure that's because you are no specifying the flag --haproxy-protocol in the curl command. If you enable proxy protocol in the ingress controller you need to decode it.

Appending '--haproxy-protocol' to curl commands works.

@MohammedNoureldin

MohammedNoureldin commented Aug 12, 2023

I still have the same issue but on bare metal, and I am not able to resolve it. Though, I have a question: why does everybody use externalTrafficPolicy: Local?

This policy preserves the source IP anyway. What is the benefit of Proxy Protocol if I am going to use the Local policy?

@Kleinkind

For anyone stumbling across this using Hetzner LoadBalancers:
You probably need to add "load-balancer.hetzner.cloud/uses-proxyprotocol": 'true' as an annotation to the ingress service (at the same place you are likely already setting "load-balancer.hetzner.cloud/location" and/or "load-balancer.hetzner.cloud/type").
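
A sketch of adding it to an existing service with kubectl (service name and namespace are placeholders for whatever your Hetzner setup uses):

kubectl annotate service ingress-nginx-controller \
  load-balancer.hetzner.cloud/uses-proxyprotocol=true \
  -n ingress-nginx --overwrite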

@adam-woj-ins

adam-woj-ins commented Jun 3, 2024

In our case the problem was that we tried to deploy two environments on nginx (backend) port 80 - one plain and one going through the proxy:

server {
    listen 80;
    server_name test.com;
    # ...
}

server {
    listen 80 proxy_protocol;
    server_name production.com;
    # ...
}

We thought that traffic from the first LB (configured to use proxy protocol) would just reach the "listen 80 proxy_protocol" block while traffic from the second LB would reach the "listen 80;" block, but that doesn't happen; you get the error "broken header: ... while reading PROXY protocol, client: ...". Nginx does not allow configuring one port in these two ways and fails while searching for the right server block, even if the correct one exists. One solution is to deploy one of the environments on another port - this works fine.
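
A sketch of that workaround: give the proxy-protocol environment its own port so each listen port has a single, consistent mode (the port number is arbitrary):

server {
    listen 80;                   # plain HTTP, no PROXY header expected
    server_name test.com;
}
server {
    listen 8080 proxy_protocol;  # PROXY-protocol traffic isolated on its own port
    server_name production.com;
}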
