security: SSH HostKeyCallback / ssh.InsecureIgnoreHostKey #4352

Closed
philpennock opened this issue Jan 29, 2018 · 11 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

@philpennock

1. What kops version are you running? The command kops version will display this information.

Code examination of HEAD.

2. What Kubernetes version are you running?

N/A

3. What cloud provider are you using?

N/A

4. What commands did you run? What is the simplest way to reproduce this issue?

fgrep -r InsecureIgnoreHostKey kops

5. What happened after the commands executed?

I got a match and investigated to confirm that it was not a test case, but code intended for production.

6. What did you expect to happen?

No matches.

7. Please provide your cluster manifest.

N/A

8. Anything else we need to know?

PR #4193 added kops toolbox bundle, which is an initial implementation not yet in a release AFAICS. Code review did not highlight the introduction of the line:

nodeSSH.SSHConfig.HostKeyCallback = ssh.InsecureIgnoreHostKey()

To be clear: this outright disables any kind of host key verification, with no warning, no prompting, nothing. Enrolling a bare-metal machine is therefore subject to a silent man-in-the-middle attack. It would be awesome not to regress like this.
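
For illustration, a minimal sketch (not the kops implementation; the helper name and parameters are hypothetical) of how the callback could pin a host key via ssh.FixedHostKey and make the insecure path an explicit, logged opt-in rather than the silent default:

package main

import (
	"errors"
	"log"

	"golang.org/x/crypto/ssh"
)

// hostKeyCallback prefers a key known out of band and only falls back to
// InsecureIgnoreHostKey when explicitly requested, and then loudly.
func hostKeyCallback(knownKey ssh.PublicKey, allowInsecure bool) (ssh.HostKeyCallback, error) {
	if knownKey != nil {
		// FixedHostKey rejects any host key that does not match knownKey.
		return ssh.FixedHostKey(knownKey), nil
	}
	if allowInsecure {
		log.Println("WARNING: SSH host key verification disabled; connection is open to man-in-the-middle attack")
		return ssh.InsecureIgnoreHostKey(), nil
	}
	return nil, errors.New("no known host key and insecure mode not requested")
}

func main() {
	cb, err := hostKeyCallback(nil, true)
	if err != nil {
		log.Fatal(err)
	}
	_ = &ssh.ClientConfig{
		User:            "admin", // placeholder user
		HostKeyCallback: cb,
	}
}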

@chrislovecnm
Contributor

chrislovecnm commented Feb 8, 2018

@philpennock kops toolbox bundle is actually for bare metal provisioning, which is a work in progress. So this will only impact bare metal clusters.

Appreciate you keeping us honest. Any recommendation for handling SSH securely? We need it to be programmatic, without human interaction.

/assign @justinsb
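
One programmatic option (a sketch only; the file path is illustrative, not an agreed design) is to verify against an OpenSSH-format known_hosts file maintained by the provisioning pipeline, using golang.org/x/crypto/ssh/knownhosts, so no human interaction is needed at dial time:

package main

import (
	"log"

	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/knownhosts"
)

func main() {
	// knownhosts.New returns an ssh.HostKeyCallback backed by one or more
	// OpenSSH known_hosts files; the dial fails if the key is unknown or changed.
	cb, err := knownhosts.New("/etc/kops/ssh_known_hosts") // hypothetical path
	if err != nil {
		log.Fatalf("loading known_hosts: %v", err)
	}
	_ = &ssh.ClientConfig{
		User:            "admin", // placeholder user
		HostKeyCallback: cb,
	}
}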

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 9, 2018
@philpennock
Author

I apologize for not answering the request for recommendations. Really, it depends on what the bare-metal provisioning does and whether, and how, you capture console logs.

The documentation on bare metal is not surfacing in searches, so it's not clear whether this is a mode of "someone ran stuff on a new box via a USB stick and we have no choices" or "a TPM is required and there's a service a box can post a TPM-managed-key-signed statement to" or anything else. The kubernetes/enhancements#360 feature is missing a link to a design doc.

Ultimately, this needs to be thought about and documented. If there has to be a "we have nothing to verify" mode, then it should be behind a flag such as --ssh-tofu or --ssh-leap-of-faith, with the fingerprint of the blindly accepted key printed and logged so that it at least becomes part of the audit trail.
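
A sketch of that idea (the flag names above are only proposals): accept the first key the host presents, but print and log its SHA256 fingerprint so the leap of faith leaves an audit trail.

package main

import (
	"log"
	"net"

	"golang.org/x/crypto/ssh"
)

// tofuHostKeyCallback accepts whatever key the host presents, but records
// the key type and fingerprint so the blind acceptance is at least auditable.
func tofuHostKeyCallback(hostname string, remote net.Addr, key ssh.PublicKey) error {
	log.Printf("TOFU: accepting %s host key for %s (%s): %s",
		key.Type(), hostname, remote, ssh.FingerprintSHA256(key))
	return nil
}

func main() {
	_ = &ssh.ClientConfig{
		User:            "admin",              // placeholder user
		HostKeyCallback: tofuHostKeyCallback, // first-use acceptance, but logged
	}
}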

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on May 11, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 9, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 9, 2018
@justinsb
Member

This isn't in a release, and we'll probably tackle it as part of the machines api / cluster api.

But it's definitely still an open issue.

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Sep 10, 2018
@justinsb
Member

Sorry, by "not in a release" I mean that it shouldn't be part of anyone's workflow; I'm sure the code is present in our released binaries.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 9, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 8, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
