Error 500: The server asked for credentials #374

Closed
Smana opened this issue Feb 14, 2016 · 70 comments

Comments

@Smana

Smana commented Feb 14, 2016

Hello,

I'm trying to test the dashboard, but I get the following errors and I can't access the UI:

2016/02/14 15:46:15 Getting list of all replication controllers in the cluster
2016/02/14 15:46:15 the server has asked for the client to provide credentials (get replicationControllers)
2016/02/14 15:46:15 Outcoming response to 62.210.220.xx:56978 with 500 status code

Could you please guide me to solve this issue?

Regards,
Smana

@bryk
Contributor

bryk commented Feb 15, 2016

How did you start the UI and Kubernetes cluster? What is your Kubernetes version?

@Smana
Author

Smana commented Feb 15, 2016

Hi @bryk

I've just run kubectl create -f src/deploy/kubernetes-dashboard.yaml
I'm running Kubernetes version 1.1.4

Thank you,

@bryk
Contributor

bryk commented Feb 15, 2016

Looks like a problem with credentials. Is your apiserver protected by some security mechanisms?

cc @floreks @maciaszczykm @cheld Have you ever seen this problem?

@Smana
Author

Smana commented Feb 15, 2016

The apiserver just listens on HTTPS with basic authentication.

@theobolo

Hello guys, I've got the same problem here.
I'm using an Azure Kubernetes cluster deployed with the getting-started guide (Azure, CoreOS, Kube, Weave).
I'm on Kubernetes version 1.1.7.

I ran the same command as @Smana: kubectl create -f src/deploy/kubernetes-dashboard.yaml

But Heapster needs the CA cert that should be inside the serviceaccount folder.
The problem is that in the Azure cloud-config the CA cert is not set up in the kube-controller service.

So do you have some instructions for setting up the CA cert in the CoreOS cloud-config?
Or something else?

Thanks

@Smana
Author

Smana commented Feb 15, 2016

let me know if you need further info :)

@bryk
Contributor

bryk commented Feb 15, 2016

@theobolo @Smana I've just pushed a new testing image of the Dashboard UI. It includes certificate setup:

RUN apk --update add ca-certificates
RUN for cert in `ls -1 /etc/ssl/certs/*.crt | grep -v /etc/ssl/certs/ca-certificates.crt`; do cat "$cert" >> /etc/ssl/certs/ca-certificates.crt; done

Can you delete the old Dashboard replication controller and recreate it? Please tell me whether this helps.
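
For example, something along these lines should do it (using the manifest path you used before; adjust if yours differs):

kubectl delete -f src/deploy/kubernetes-dashboard.yaml
kubectl create -f src/deploy/kubernetes-dashboard.yaml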

@Smana
Author

Smana commented Feb 15, 2016

@bryk thank you, unfortunately I still get the same error:

2016/02/15 13:27:59 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 62.210.220.66:47908
2016/02/15 13:27:59 Getting list of all replication controllers in the cluster
2016/02/15 13:27:59 the server has asked for the client to provide credentials (get replicationControllers)
2016/02/15 13:27:59 Outcoming response to 62.210.220.xx:47908 with 500 status code

@Smana
Author

Smana commented Feb 15, 2016

My apiserver is reachable with the following command; I don't know if that helps:
curl --cacert /etc/kubernetes/ssl/ca.pem -u kube:xxxxxxxxxxx https://62.210.220.xx:8443

@floreks
Member

floreks commented Feb 15, 2016

@Smana

Maybe our backend is connecting to the master on a different port, where only basic authentication is available.

This looks like a similar issue:
kubernetes/kubernetes#7622 (comment)

@bryk what do you think? Is this even possible for in-cluster configuration?
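
One quick way to check which address and port the in-cluster client actually dials: Kubernetes injects them into every pod as env vars (the pod name below is just a placeholder):

kubectl exec {some-pod} -- env | grep KUBERNETES_SERVICE
# typically something like:
# KUBERNETES_SERVICE_HOST=10.233.0.1
# KUBERNETES_SERVICE_PORT=443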

@cheld
Contributor

cheld commented Feb 15, 2016

Dashboard relies on the service account for authentication to the API server.
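
A rough sketch of what that in-cluster authentication amounts to (the paths are the standard service account mount; the cluster IP below is just the one from this thread):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://10.233.0.1/api/v1/replicationcontrollers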

@bryk bryk modified the milestone: v1.0 Feb 15, 2016
@theobolo

@bryk Even with the new image I get an error:

Get https://10.16.0.1:443/api/v1/replicationcontrollers: read tcp 172.17.0.3:45116->10.16.0.1:443: read: connection reset by peer

In my Kubernetes cluster the Kube API is available at 172.18.0.12:8080 or 172.18.0.12:8443, but if I try a curl at http://10.16.0.1:80 or https://10.16.0.1:443 nothing happens.

And I still have the ca.crt error :/

To be clear: I'm running on CoreOS, and in my cloud-config the root-ca-cert option is not defined, so the ca.crt is not created during the deployment. That should be the problem, no?

@Smana
Author

Smana commented Feb 23, 2016

Hello, any news on this please?

@bryk
Contributor

bryk commented Feb 23, 2016

@luxas Can you help?

@bryk
Contributor

bryk commented Feb 23, 2016

@Smana We've just done a new canary and versioned release. The client library was updated. Can you check once more with src/deploy/kubernetes-dashboard-canary.yaml?

@Smana
Author

Smana commented Feb 23, 2016

@bryk unfortunately I'm still getting the same error:

2016/02/23 16:20:52 Starting HTTP server on port 9090
2016/02/23 16:20:52 Creating API server client for https://10.233.0.1:443
2016/02/23 16:20:52 Creating in-cluster Heapster client
2016/02/23 16:23:10 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 62.210.220.66:58166
2016/02/23 16:23:10 Getting list of all replication controllers in the cluster
2016/02/23 16:23:10 the server has asked for the client to provide credentials (get replicationControllers)
2016/02/23 16:23:10 Outcoming response to 62.210.220.xx:58166 with 500 status code

@luxas
Member

luxas commented Feb 23, 2016

Sure!
@theobolo Try this:
Append

--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota

to kube-apiserver.service.
Append

--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt

to kube-controller-manager.service.
This will probably fix your issue. (BTW, I haven't used k8s on Azure, but I read the source just now, and this will probably help.)

The two issues are different: @theobolo's is that ServiceAccounts aren't created for the default namespace for connecting to the apiserver. The controller-manager flags above fix that. There's also a ServiceAccountController, and you have to enable that one as well. It takes a normal pod and injects the ca.crt and token files into /var/run/secrets/kubernetes.io/serviceaccount/
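
To verify the injection worked, something like this (pod name is a placeholder) should list ca.crt, namespace and token:

kubectl exec {some-pod} -- ls /var/run/secrets/kubernetes.io/serviceaccount/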

@luxas
Member

luxas commented Feb 23, 2016

@Smana Are you running on bare-metal, some cloud provider or a custom config?
We have to know that to be able to help.

@Smana
Author

Smana commented Feb 23, 2016

I'm running on a virtual machine (OS: Fedora); this VM is not running on a cloud provider.
The only difference I can see from a "standard" installation is the TCP port for HTTPS.
In my case the apiserver is listening on 8443.

Fedora 23
kubernetes 1.1.7
network plugin calico

deployed with http://kubespray.io

@Smana
Author

Smana commented Feb 23, 2016

@luxas Let me know if you need further info

@luxas
Member

luxas commented Feb 23, 2016

What do kubectl get secrets and kubectl get {some_pod} -o yaml output?

@Smana
Author

Smana commented Feb 23, 2016

kubectl get secrets --all-namespaces 
NAMESPACE     NAME                  TYPE                                  DATA      AGE
ci            default-token-zqvfu   kubernetes.io/service-account-token   2         10d
default       default-token-2zk2l   kubernetes.io/service-account-token   2         11d
kube-system   default-token-i5t58   kubernetes.io/service-account-token   2         11d
web           default-token-2l89f   kubernetes.io/service-account-token   2         6d
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "kubernetes-dashboard-canary-sr219",
        "generateName": "kubernetes-dashboard-canary-",
        "namespace": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system/pods/kubernetes-dashboard-canary-sr219",
        "uid": "6472ffee-da49-11e5-af9b-0cc47a0db68e",
        "resourceVersion": "598820",
        "creationTimestamp": "2016-02-23T16:20:45Z",
        "labels": {
            "app": "kubernetes-dashboard-canary",
            "version": "canary"
        },
        "annotations": {
            "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"kube-system\",\"name\":\"kubernetes-dashboard-canary\",\"uid\":\"647173a1-da49-11e5-af9b-0cc47a0db68e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"598793\"}}\n"
        }
    },
    "spec": {
        "volumes": [
            {
                "name": "default-token-i5t58",
                "secret": {
                    "secretName": "default-token-i5t58"
                }
            }
        ],
        "containers": [
            {
                "name": "kubernetes-dashboard-canary",
                "image": "gcr.io/google_containers/kubernetes-dashboard-amd64:canary",
                "ports": [
                    {
                        "containerPort": 9090,
                        "protocol": "TCP"
                    }
                ],
                "resources": {},
                "volumeMounts": [
                    {
                        "name": "default-token-i5t58",
                        "readOnly": true,
                        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
                    }
                ],
                "livenessProbe": {
                    "httpGet": {
                        "path": "/",
                        "port": 9090,
                        "scheme": "HTTP"
                    },
                    "initialDelaySeconds": 30,
                    "timeoutSeconds": 30
                },
                "terminationMessagePath": "/dev/termination-log",
                "imagePullPolicy": "Always"
            }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "serviceAccountName": "default",
        "serviceAccount": "default",
        "nodeName": "node1"
    },
    "status": {
        "phase": "Running",
        "conditions": [
            {
                "type": "Ready",
                "status": "True",
                "lastProbeTime": null,
                "lastTransitionTime": null
            }
        ],
        "hostIP": "62.210.220.xx",
        "podIP": "10.233.64.28",
        "startTime": "2016-02-23T16:20:45Z",
        "containerStatuses": [
            {
                "name": "kubernetes-dashboard-canary",
                "state": {
                    "running": {
                        "startedAt": "2016-02-23T16:20:52Z"
                    }
                },
                "lastState": {},
                "ready": true,
                "restartCount": 0,
                "image": "gcr.io/google_containers/kubernetes-dashboard-amd64:canary",
                "imageID": "docker://e63249efb9e297f63187ab8534051391aefe3dcf1116be0c493b8bcdb5b419a5",
                "containerID": "docker://c0d402a9e5b62e748bd1fb642817b43be9734f800e5f3c66b0c8d063a912677c"
            }
        ]
    }
}

@luxas
Member

luxas commented Feb 23, 2016

Can you run:

kubectl get svc
NAME          CLUSTER_IP   EXTERNAL_IP    PORT
kubernetes {CLUSTER_IP}  not important      {PORT}

kubectl exec -it po {some_other_pod_than_dashboard} /bin/bash
> curl -k {CLUSTER_IP}:{PORT}
> curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt {CLUSTER_IP}:{PORT}

@Smana
Author

Smana commented Feb 23, 2016

@luxas That works as expected

kubectl exec -ti test-tiorn -- /bin/bash
root@test-tiorn:/# curl -k https://10.233.0.1                                  
Unauthorized

root@test-tiorn:/# curl -k -u kube:changeme --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://10.233.0.1
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/healthz",
    "/healthz/ping",
    "/logs/",
    "/metrics",
    "/resetMetrics",
    "/swagger-ui/",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

@Smana
Author

Smana commented Feb 23, 2016

I forgot to mention that I'm using the NodePort.

@theobolo

@luxas Nice, it worked! So now I have the same problem as above.

I get this error:

2016/02/24 00:36:31 Starting HTTP server on port 9090
2016/02/24 00:36:31 Creating API server client for https://10.16.0.1:443
2016/02/24 00:36:31 Creating in-cluster Heapster client
2016/02/24 00:39:12 Incoming HTTP/1.1 GET /api/v1/replicationcontrollers request from 10.32.0.1:55580
2016/02/24 00:39:12 Getting list of all replication controllers in the cluster
2016/02/24 00:39:16 Get https://10.16.0.1:443/api/v1/replicationcontrollers: read tcp 172.17.0.5:46236->10.16.0.1:443: read: connection reset by peer
2016/02/24 00:39:16 Outcoming response to 10.32.0.1:55580 with 500 status code

With the NodePort as well.

@luxas
Member

luxas commented Feb 24, 2016

Can you test without the NodePort and access the dashboard via
http://[master-ip]:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard?
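
If the insecure port isn't reachable from the outside, kubectl proxy should give you the same path locally (a sketch, assuming kubectl is configured against your cluster):

kubectl proxy --port=8001
# then open http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard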

@Smana
Author

Smana commented Feb 24, 2016

http://62.210.220.xx:8080/api/v1/proxy/namespaces/kube-system/dashboard-canary

kubectl get svc --namespace=kube-system
NAME               CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR                          AGE
dashboard-canary   10.233.48.248   nodes         80/TCP    app=kubernetes-dashboard-canary   17h
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}
kubectl get endpoints --namespace=kube-system
NAME               ENDPOINTS           AGE
dashboard-canary   10.233.64.28:9090   17h

strange... still digging

@cescoferraro

kubectl get secrets does not return anything.

cesco@desktop: ~/code/go/src/bitbucket.org/cescoferraro/cluster/terraform on master [+!?]
$ kubectl get secrets
cesco@desktop: ~/code/go/src/bitbucket.org/cescoferraro/cluster/terraform on master [+!?]
$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.100.0.1   <none>        443/TCP   18m

@antoineco

Have you tried using the recommended admission control plug-ins?

@cescoferraro

That's what I'm using. I'm starting the api-server with this:

ExecStart=/opt/bin/kube-apiserver \
                          --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
                          --logtostderr=true  \
                          --insecure-bind-address=${MASTER_PRIVATE} \
                          --insecure-port=8080  \
                          --bind-address=0.0.0.0  \
                          --secure-port=6443  \
                          --runtime-config=api/v1 \
                          --allow-privileged=true \
                          --service-cluster-ip-range=10.100.0.0/16 \
                          --advertise-address=${MASTER_PUBLIC} \
                          --token-auth-file=/data/kubernetes/token.csv \
                          --etcd-cafile=/home/core/ssl/ca.pem   \
                          --etcd-certfile=/home/core/ssl/etcd1.pem  \
                          --etcd-keyfile=/home/core/ssl/etcd1-key.pem \
                          --etcd-servers=https://${MASTER_PRIVATE}:2379,https://${DATABASE_PRIVATE}:2379 \
                          --cert-dir=/home/core/ssl \
                          --client-ca-file=/home/core/ssl/ca.pem \
                          --tls-cert-file=/home/core/ssl/kubelet.pem \
                          --tls-private-key-file=/home/core/ssl/kubelet-key.pem \
                          --kubelet-certificate-authority=/home/core/ssl/ca.pem \
                          --kubelet-client-certificate=/home/core/ssl/kubelet.pem \
                          --kubelet-client-key=/home/core/ssl/kubelet-key.pem \
                          --kubelet-https=true

@antoineco

--tls-cert-file=/home/core/ssl/kubelet.pem

You're using the kubelet cert for your API server, but is it a server certificate with the correct SANs for your server's common name and IP?

If yes, then with the following controller-manager flags you should be good to go:

--root-ca-file=/home/core/ssl/ca.pem
--service-account-private-key-file=/home/core/ssl/kubelet-key.pem
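
A quick way to check the SANs on that cert (assuming openssl is available on the host):

openssl x509 -in /home/core/ssl/kubelet.pem -noout -text | grep -A1 "Subject Alternative Name"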

@cescoferraro

That's what I'm doing. I'm using the same self-signed certificate I created for etcd2, following the recommended etcd procedure, and I made sure to add all private and public IPs to its configuration. I'm not sure what you mean by server common names.

ExecStart=/opt/bin/kube-controller-manager \
                              --address=0.0.0.0 \
                              --master=https://${COREOS_PRIVATE_IPV4}:6443 \
                              --logtostderr=true \
                              --kubeconfig=/home/core/.kube/config  \
                              --cluster-cidr=10.132.0.0/16 \
                              --register-retry-count 100  \
                              --root-ca-file=/home/core/ssl/ca.pem \
                              --service-account-private-key-file=/home/core/ssl/kubelet-key.pem

@cescoferraro

I was having trouble with my certificates; all solved now. Now I don't need to pass the apiserver-host flag, but it doesn't ask for my API's basic authentication, which is what I was expecting it to do.

If I explicitly provide the flag, the connection to the API fails with the "no root CA" error, which is weird because all my pods have /var/run/secrets/kubernetes.io mounted as expected.

@bryk
Contributor

bryk commented Mar 29, 2016

I'm closing this bug. Please continue discussion if needed.

@bryk bryk closed this as completed Mar 29, 2016
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Apr 1, 2016
Automatic merge from submit-queue

Fix so setup-files don't recreate/invalidate certificates that already exist

Fixes: #23197 and a lot of other DNS and dashboard issues

This is quite critical for `docker`-based users and should be considered as a **cherrypick-candidate** as it makes a lot of people wonder why Dashboard and/or DNS doesn't work. Example: kubernetes/dashboard#374

Earlier when you shut your `docker.md` cluster down and started it again, all ServiceAccounts became invalidated by `setup-files` that happily ran once again and replaced all files. That made `apiserver` and `controller-manager` pick up the new certs (or there was a race condition, they _could_ have picked up the old certs too, but that's unlikely) and the old certs were put into `/var/run/secrets` because the ServiceAccount's Secrets were stored in etcd, which `setup-files` didn't touch.

@fgrzadkowski @huggsboson @thockin @mikedanese @vishh @pwittrock @eparis @bgrant0607
@hanpenghero

@bryk I encountered the same issue in 1.2, and based on the discussion above I deleted the secrets related to the "kube-system" namespace, but now I cannot create the dashboard again. The error is as follows. Do you know where I can find a guide to figure this out? Thx

FailedCreate Error creating: Pod "kubernetes-dashboard-" is forbidden: no API token found for service account kube-system/default, retry after the token is automatically created and added to the service account

@bryk
Contributor

bryk commented Apr 25, 2016

and based on the discussion above I deleted the secrets related to the "kube-system" namespace, but now I cannot create the dashboard again

You need to recreate the secrets now (and possibly the service accounts). Refer to the documentation to learn how.
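
One way that usually works (a sketch, not an official procedure): delete the default service account in that namespace and let the controllers recreate it together with a fresh token secret:

kubectl delete serviceaccount default --namespace=kube-system
kubectl get serviceaccounts --namespace=kube-system   # "default" should reappear shortly
kubectl get secrets --namespace=kube-system           # a new default-token-xxxxx should be listed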

@Bregor

Bregor commented Jun 14, 2016

I read all the comments, but I still don't understand: is it possible to use SSL key auth for the dashboard to access the apiserver?

@Bregor

Bregor commented Jun 14, 2016

The apiserver runs with the following:

- --secure-port=8443
- --insecure-bind-address=127.0.0.1
- --insecure-port=8080
- --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota
- --runtime-config=extensions/v1beta1=true,extensions/v1beta1/thirdpartyresources=true
- --tls_cert_file=/etc/kubernetes/ssl/apiserver.pem
- --tls_private_key_file=/etc/kubernetes/ssl/apiserver-key.pem
- --client_ca_file=/etc/kubernetes/ssl/ca.pem
- --service_account_key_file=/etc/kubernetes/ssl/apiserver-key.pem

Other components like controller-manager use key/cert auth:

- --kubeconfig=/etc/kubernetes/kubeconfig.yaml

In the kubeconfig there is the following:

clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://10.83.8.197:8443
  name: bots
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/node.pem
    client-key: /etc/kubernetes/ssl/node-key.pem
...

Is there any way to achieve this for dashboard?

@bryk
Contributor

bryk commented Jun 15, 2016

You can use kubeconfig files with version 1.1.0-beta2 or 1.1 (to be released in 2 weeks). All you need to do is specify the KUBECONFIG env var and point it at the file.

@Bregor

Bregor commented Jun 15, 2016

@bryk thank you!
Works like a charm.

@bryk
Contributor

bryk commented Jun 15, 2016

@Bregor We actually should have a command-line option for this. Can you check whether a --kubeconfig option works? If not, it is super easy to add.

@Bregor

Bregor commented Jun 15, 2016

@bryk

$ docker run -it --rm gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0-beta3 --kubeconfig=/blabla
unknown flag: --kubeconfig
Usage of /dashboard:
...

@theobolo

@Bregor docker run? Not kubectl run either?

@Bregor

Bregor commented Jun 15, 2016

@theobolo is there any difference? The kubelet will use this very container anyway.

@theobolo

Yep, but not sure about that :/ @bryk can you confirm?

@bryk
Contributor

bryk commented Jun 15, 2016

There should be no difference. You should use kubelet/docker directly only for testing. In a real environment, deploy this as a pod to your cluster.

@Bregor

Bregor commented Jun 15, 2016

Exactly.
In the current manifest (I use kind: Deployment) there is the following:

...
    spec:
      containers:
      - name: kubernetes-dashboard
        command:
          - /dashboard
          - --apiserver-host=https://kubernetes.default.svc.kubernetes.local:8443
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0-beta3
        imagePullPolicy: Always
...
        env:
        - name: KUBECONFIG
          value: "/etc/kubernetes/kubeconfig.yaml"
...
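
One thing worth noting for anyone copying this: the kubeconfig (and any cert/key files it references) also has to exist inside the container. A sketch of one way to do that, assuming the files live on the node under /etc/kubernetes (a secret volume would work just as well):

...
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes
...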
