Vault 1.4.0 won't start when a seal stanza is added in aws eks #8844
Comments
@corbesero thanks for opening a separate issue to follow up on this. We've done some initial investigation on our end, and believe that it might be an issue with the instance profile not being detected correctly. Can you create IAM Access Keys (with the proper permissions), and provide them directly to test things out?
Side note: if you're doing this in a test environment, make sure to delete the volumes that were created on the last attempt (via |
I did that. I created a new AWS key pair with a policy allowing KMS access, and Vault did come up. I did not unseal it, but I saw log messages. This is not exactly an identical test, since I didn't go through the step of first letting it come up without the awskms seal; I will try that on Monday. But this does imply that Vault is not happy when it depends on the instance role's profile or the pod OIDC from the service account. We were really expecting to be able to use that feature, since our other EKS services use that mechanism.
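For reference, a policy allowing KMS access for auto-unseal would look roughly like the sketch below. The key ARN is a placeholder, and the exact action list should be checked against the Vault awskms seal documentation for your version:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/UNSEAL-KEY-ID"
    }
  ]
}
```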
I can confirm my original scenario. I have an AWS access/secret key pair set in the Helm values and the secret in the namespace. If I create a Vault without the seal stanza, it can start up. If I then add the seal stanza, the new containers do come up. I was able to do the migration, and afterwards the containers did seem to do the auto-unseal correctly. This strongly implies that Vault has a problem coming up when it depends on the instance or service-account profile instead of an explicit AWS key configuration.
Thanks for doing the setup to verify things! I don't want to draw conclusions yet, but it may be related to #8847 (also an issue with instance profile metadata not being picked up).
@calvn I think it is the same issue. I noticed #8847 recently too. When we were installing Vault 1.3, the pods were only getting the instance profile of the worker node, not the role specified in the service account via OIDC. Switching to 1.4 just didn't expose the underlying profile issue until I added the awskms seal, which completely broke the instance profile being used.
I opened #8847. I am also using an AWS KMS seal with 1.4.0, and vault successfully uses the ECS task role and not the EC2 instance profile to acquire AWS credentials for using KMS to unseal. Even so, credential acquisition is not working for the AWS auth backend. |
Any progress here? We have the same issue on AWS EKS 1.15 with OIDC mapped to a service account, creating a new Vault 1.4.1 from scratch and deploying from a fresh git checkout of the Helm chart. The SA is annotated according to the EKS documentation, the pods have the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables set, tokens are mounted, and access to the EC2 instance profile is disabled (we drop any packets to 169.254.169.254).
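For anyone comparing setups: the IRSA wiring described above amounts to annotating the service account with the role ARN. A sketch per the EKS docs, with placeholder names and account ID:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault          # placeholder; must match the chart's service account name
  namespace: vault
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/vault-unseal  # placeholder ARN
```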
My comment on the linked issue might apply here too: #8847 (comment) |
I'm seeing pretty much the same problem, but on a manually created Kubernetes cluster (not EKS, kops, or anything like that; just plain EC2 instances with kubeadm). As soon as I add the awskms seal, Vault starts but does not output any logs (regardless of log level), nor does it open port 8200. I've tried attaching a completely wide-open IAM role to the EC2 instance, as well as using a secret key/access key pair for a role constrained to just the KMS operations listed in the docs. I verified these keys work with KMS operations when used with the aws CLI. Kubernetes version: 1.18.2
I believe #7738 fixes this |
@chancez I just tried the new 1.4.3 vault image and it doesn't appear to have fixed the issue. I'm still seeing the exact same symptoms as I described above. I'm also using version 0.6.0 of the Helm chart. |
I had to set |
Hi @corbesero, have you had a chance to try vault 1.5.0 to see if that resolves the issue? Or 1.4.3 w/AWS_ROLE_SESSION_NAME set? |
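In case it helps others hitting this: with the vault-helm chart, an environment variable such as AWS_ROLE_SESSION_NAME can be injected through the server values. A sketch, assuming the chart's `server.extraEnvironmentVars` key (check your chart version's values.yaml for the exact name):

```yaml
server:
  extraEnvironmentVars:
    AWS_ROLE_SESSION_NAME: vault  # any session name; reported to work around the web-identity credential issue
```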
Closing for now. |
Describe the bug
When I add a seal stanza (awskms) to a Vault configuration via the vault-helm chart (0.5.0), Vault does not become available in the containers.
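The stanza in question is the standard awskms seal block. A minimal sketch, with the region and key ID as placeholders:

```hcl
seal "awskms" {
  region     = "us-east-1"                                            # placeholder
  kms_key_id = "arn:aws:kms:us-east-1:111122223333:key/UNSEAL-KEY-ID" # placeholder
}
```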
To Reproduce
Expected behavior
The Vault pods should come up so that I can do the seal migration.
Environment:
Vault 1.4.0
AWS EKS 1.15
vault-helm chart at tag 0.5.0
Vault server configuration file(s):
Also, see attached values file for helm
Additional context
No log output is produced.
This is the output of a ps on the container
I have attached the output of
kubectl describe pods -n vault
for the vault-0 and vault-1 pods. The vault-0 output is from while the pod is still there, but the vault-1 output shows what happens after a while, when the container has completely failed. If I comment out the seal stanza, I can do a helm upgrade, delete the pods, and the new ones come up and can be unsealed.
kubectl-describe-pod-vault-0.txt
kubectl-describe-pod-vault-1.txt
values.yaml.txt