
unable to authenticate to kube-apiserver from trusted cluster #3063

Closed
drewwells opened this issue Oct 7, 2019 · 6 comments

Comments

@drewwells


What happened:
  • login to the trusted cluster works
  • able to successfully ssh to nodes in the trusted cluster
  • unable to use kubectl to authenticate to the trusted cluster's kube-apiserver. It appears that the credentials being used are invalid for the kube-apiserver.

[kube-apiserver-ip-172-19-76-228.ec2.internal] I1007 23:00:48.393719       1 log.go:172] http: TLS handshake error from 172.19.77.157:42752: remote error: tls: bad certificate

From main auth-service

Error forwarding to https://100.64.0.1:443/version?timeout=32s, err: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") forward/fwd.go:18

What you expected to happen:
kubectl traffic would successfully forward through the main auth server to the east proxy and on to the kube-apiserver.

How to reproduce it (as minimally and precisely as possible):
Deploy the helm chart to two clusters. There are several things that need to be changed; most notably, --advertise-ip needs to be set to the address where the proxy is running (the pod IP).
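For reference, a minimal sketch of the corresponding teleport.yaml fragment; `advertise_ip` and the `proxy_service.kubernetes` section are real Teleport 4.x config fields, but the values shown are placeholders (in a chart, the pod IP would normally come from the downward API rather than being hard-coded):

```yaml
# Fragment of teleport.yaml — illustrative values only.
teleport:
  # Must be an address peers can actually reach the proxy on
  # (the pod IP in this deployment), not a cluster-internal default.
  advertise_ip: 10.42.0.17
proxy_service:
  enabled: yes
  kubernetes:
    enabled: yes
```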

Environment:

  • Teleport version (use teleport version): v4.1.0 v4.2.0-alpha.1
  • Tsh version (use tsh version): v4.1.0
  • OS (e.g. from /etc/os-release): container (debian)

Relevant Debug Logs If Applicable

If you specify kubeconfigs for both clusters, you will notice that the main auth_service opens a tunnel to the east auth_service and then requests its own kube-apiserver. It appears that instead of asking east to open a connection to its kube-apiserver, main uses its own kube-apiserver credentials to make the connection, which is incorrect.

This issue is masked when using in-cluster config: both connections are made to kubernetes.default.svc, so the bug becomes less obvious.

@drewwells
Author

I went through all the data in dynamodb and don't see either kube-apiserver address in there. How does teleport know where to forward the kube-apiserver requests to?

@tarrall

tarrall commented Oct 14, 2019

Not positive, but I think they may have fixed this in 4.1.1. We had a similar issue, though our main Teleport cluster isn't running on k8s like yours is.

I believe that the kube-apiserver address and CA come from the kubernetes client when you're running on a k8s cluster and not specifically supplying them via config. I.e. third party code, not in the teleport repo...

@webvictim
Contributor

Yes, a bug fix was made to the Kubernetes forwarding in Teleport 4.1.1 which should resolve this issue. Please upgrade and let us know how you get on.

@webvictim
Contributor

@drewwells Did you test a version of Teleport >4.1.0?

@drewwells
Author

I have not tested this yet; I'm not getting emails from this issue for some reason. I'll give it a shot.

@drewwells
Author

Fixed in #3070

4.1.1 release
