
MountVolume.SetUp failed for volume "efs-pv" #192

Closed
nomopo45 opened this issue Jun 22, 2020 · 34 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.


@nomopo45

nomopo45 commented Jun 22, 2020

/kind bug

Hello,

I followed this documentation: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html. As written in the user guide, I'm trying to use the "Multiple Pods Read Write Many" example.

I have an EKS cluster, and I have an issue with my pod creation: apparently it is unable to mount the EFS volume.

Here is some log output that I found:

```
MountVolume.SetUp failed for volume "efs-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "fs-xxxxxxx:/" at "/var/lib/kubelet/pods/eec7379e-0d59-440d-87c3-050c59c27b45/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs fs-xxxxxxx:/ /var/lib/kubelet/pods/eec7379e-0d59-440d-87c3-050c59c27b45/volumes/kubernetes.io~csi/efs-pv/mount
Output: Traceback (most recent call last):
  File "/sbin/mount.efs", line 1375, in <module>
    main()
  File "/sbin/mount.efs", line 1355, in main
    bootstrap_logging(config)
  File "/sbin/mount.efs", line 1031, in bootstrap_logging
    raw_level = config.get(CONFIG_SECTION, 'logging_level')
  File "/lib64/python2.7/ConfigParser.py", line 607, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'mount'
```

and

Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage default-token-xvnhr]: timed out waiting for the condition

I checked many things: my security groups are fully open inbound and outbound, and my pv.yaml, pod1.yaml, classtorage.yml and claim.yaml are exactly the same as the ones here: https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/multiple_pods

Environment:

```
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
```

Driver version: I guess latest; I installed it today following the doc.

If you have any idea or recommendation it would be very nice. The user guide looks so simple that I'm frustrated I'm not able to make it work, and I can't see what I did wrong.

Thanks in advance

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 22, 2020
@adamdelarosa

adamdelarosa commented Jun 22, 2020

I also have this issue.
Version:

```
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
```

@daisuke-yoshimoto

@noamran

You might be able to solve this problem by using an updated image of amazon/aws-efs-csi-driver, as shown in the Stack Overflow answer below.

https://stackoverflow.com/questions/62447132/mounting-efs-in-eks-cluster-example-deployment-fails

@nmtulloch27

I have this issue as well.

@ossareh
Contributor

ossareh commented Jun 23, 2020

We have this issue also.

```
MountVolume.SetUp failed for volume "efs-collab-jhub-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "" at "/var/lib/kubelet/pods/faae4c88-b879-48a0-bafe-3f9d387218b8/volumes/kubernetes.io~csi/efs-collab-jhub-pv/mount": mount failed: waitid: no child processes
Mounting command: mount
Mounting arguments: -t efs /var/lib/kubelet/pods/faae4c88-b879-48a0-bafe-3f9d387218b8/volumes/kubernetes.io~csi/efs-collab-jhub-pv/mount
Output: mount: /var/lib/kubelet/pods/faae4c88-b879-48a0-bafe-3f9d387218b8/volumes/kubernetes.io~csi/efs-collab-jhub-pv/mount: can't find in /etc/fstab.
  Warning  FailedMount  24s                  kubelet, ip-10-0-17-229.ec2.internal  Unable to attach or mount volumes: unmounted volumes=[efs-collab-jhub-pvc], unattached volumes=[volume-mike-40erisyon-2ecom efs-collab-jhub-pvc]: timed out waiting for the condition
  Warning  FailedMount  17s (x8 over 2m27s)  kubelet, ip-10-0-17-229.ec2.internal  MountVolume.SetUp failed for volume "efs-collab-jhub-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "" at "/var/lib/kubelet/pods/faae4c88-b879-48a0-bafe-3f9d387218b8/volumes/kubernetes.io~csi/efs-collab-jhub-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs /var/lib/kubelet/pods/faae4c88-b879-48a0-bafe-3f9d387218b8/volumes/kubernetes.io~csi/efs-collab-jhub-pv/mount
Output: mount: /var/lib/kubelet/pods/faae4c88-b879-48a0-bafe-3f9d387218b8/volumes/kubernetes.io~csi/efs-collab-jhub-pv/mount: can't find in /etc/fstab.
```

On Sunday our EFS CSI pod died and restarted; I've been using :latest because of the issues with access points. That restart picked up the newest :latest, which has #185 merged in. I've updated our PV and PVC definitions to match those in the examples directory for access points, and now we have the error above.

PVC:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
  creationTimestamp: "2020-06-23T00:33:17Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: efs-collab-jhub-pvc
  namespace: collab-jhub
  resourceVersion: "10284838"
  selfLink: /api/v1/namespaces/collab-jhub/persistentvolumeclaims/efs-collab-jhub-pvc
  uid: 52fabc1b-3d76-4b98-bd96-eedb15c13fd0
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: efs-collab-jhub
  volumeMode: Filesystem
  volumeName: efs-collab-jhub-pv
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  phase: Bound
```

PV:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  creationTimestamp: "2020-06-23T00:33:17Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: efs-collab-jhub-pv
  resourceVersion: "10284836"
  selfLink: /api/v1/persistentvolumes/efs-collab-jhub-pv
  uid: 62f6a6e7-f43d-4666-becc-e725e57f7599
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: efs-collab-jhub-pvc
    namespace: collab-jhub
    resourceVersion: "10284833"
    uid: 52fabc1b-3d76-4b98-bd96-eedb15c13fd0
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-<id>::fsap-<id>
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-collab-jhub
  volumeMode: Filesystem
status:
  phase: Bound
```

@nomopo45
Author

I was able to solve the problem using an updated image of amazon/aws-efs-csi-driver, as shown in the Stack Overflow answer below.

https://stackoverflow.com/questions/62447132/mounting-efs-in-eks-cluster-example-deployment-fails

Here are all my steps:

Using EFS with EKS

Follow this doc : https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html

Be careful when you install the CSI driver: make sure to use the image amazon/aws-efs-csi-driver:latest.

After installing the CSI driver, check your efs-csi-node pod name with `kubectl get pod -n kube-system`.

Then check the image version with `kubectl describe pod efs-csi-node-xxxxx -n kube-system`.

If you don't see `Image: amazon/aws-efs-csi-driver:latest`, you need to update. To do so:

  1. `git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git`

  2. Change the newTag field in the file aws-efs-csi-driver/deploy/kubernetes/overlays/stable/kustomization.yaml

From

```yaml
- name: amazon/aws-efs-csi-driver
  newTag: v0.3.0
```

To

```yaml
- name: amazon/aws-efs-csi-driver
  newTag: latest
```

Then apply with the -k option, because it's a kustomization and must point at a folder, for example: `kubectl apply -k aws-efs-csi-driver/deploy/kubernetes/overlays/stable/`

Then it should be OK. You can check using the describe command above and keep following the AWS documentation mentioned at the beginning.

@nmtulloch27

Still didn't resolve my issue... I am getting:

```
Mounting arguments: -t efs -o tls fs-xxxxxxx:/ /var/lib/kubelet/pods/xxxxx-xxxx-xxx-xxx/volumes/kubernetes.io~csi/efs-pv/mount
```

@ossareh
Contributor

ossareh commented Jun 23, 2020

@nomopo45 I feel 100% sure that I'm using the latest amazon/aws-efs-csi-driver. Are you able to clarify which sha you're using?

```
$ kubectl -n kube-system describe pod efs-csi-node-b5l54
.. snip ...
Containers:
  efs-plugin:
    Container ID:  docker://b4910705051a237dfc628f78b1d738c25a90e6076cf9360fe9abdd49ac8182d9
    Image:         amazon/aws-efs-csi-driver:latest
    Image ID:      docker-pullable://amazon/aws-efs-csi-driver@sha256:43fb05d4544230010ecb8d7619a4d3b6c2c317a73a79925638501cc82754843c
    Port:          9809/TCP
    Host Port:     9809/TCP
    Args:
      --endpoint=$(CSI_ENDPOINT)
      --logtostderr
      --v=5
    State:          Running
      Started:      Mon, 22 Jun 2020 22:58:55 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:healthz/healthz delay=10s timeout=3s period=2s #success=1 #failure=5
    Environment:
      CSI_ENDPOINT:  unix:/csi/csi.sock
    Mounts:
      /csi from plugin-dir (rw)
      /var/lib/kubelet from kubelet-dir (rw)
      /var/run/efs from efs-state-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mkx47 (ro)
```

As you can see I'm using docker-pullable://amazon/aws-efs-csi-driver@sha256:43fb05d4544230010ecb8d7619a4d3b6c2c317a73a79925638501cc82754843c with the configuration above, and I can assure you there is a problem with this setup. It could be the case that there is a misconfiguration somewhere - are you able to suggest where that may be?

@nmtulloch27

> @nomopo45 I feel 100% sure that I'm using the latest amazon/aws-efs-csi-driver. Are you able to clarify which sha you're using? [...]

I concur... I have pulled the latest and the issue still persists.

@ossareh
Contributor

ossareh commented Jun 23, 2020

@nmtulloch27 I think I'm on to something; just to confirm - are you using the helm chart for your installation or the kustomize based solution that @nomopo45 suggested?

ossareh added a commit to ossareh/aws-efs-csi-driver that referenced this issue Jun 23, 2020
In the absence of a regular release cycle the community has started using `:latest`.
`helm/values.yaml` provides a `pullPolicy` for the main container, however that is not threaded
through to the daemonset; as such kubernetes applies the default "IfNotPresent" value. From a user's
point of view you're locked into the image at the time you first installed the chart. Additionally
you can now specify a pull policy for each of the side car containers also.

fixes kubernetes-sigs#192
@nmtulloch27

> @nmtulloch27 I think I'm on to something; just to confirm - are you using the helm chart for your installation or the kustomize based solution that @nomopo45 suggested?

I am using the kustomize-based solution that @nomopo45 suggested.

ossareh added a commit to ossareh/aws-efs-csi-driver that referenced this issue Jun 23, 2020
ossareh added a commit to ossareh/aws-efs-csi-driver that referenced this issue Jun 23, 2020
@ossareh
Contributor

ossareh commented Jun 23, 2020

@nmtulloch27 hmm, OK. If you take a look at #193 you can see I was getting tripped up by imagePullPolicy not being set correctly. If you add `imagePullPolicy: Always` under (or around) this line, you'll pull the actual latest, not the version that you pulled when you first installed the driver.
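For illustration, the change being described would look roughly like this in the DaemonSet container spec (a sketch only; the surrounding fields are abbreviated and may differ between driver versions):

```yaml
# deploy/kubernetes/base/node.yaml (fragment, illustrative)
containers:
  - name: efs-plugin
    image: amazon/aws-efs-csi-driver:latest
    imagePullPolicy: Always  # pull a fresh :latest on every pod start instead of reusing the cached image
```

With the default IfNotPresent policy, a node keeps serving whatever :latest it pulled when the driver was first installed.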

@nmtulloch27

> @nmtulloch27 hmm, OK. If you take a look at #193 you can see I was getting tripped up by imagePullPolicy not being set correctly. [...]

I see, so then you would run `kubectl apply -k aws-efs-csi-driver/deploy/kubernetes/base` instead of `kubectl apply -k aws-efs-csi-driver/deploy/kubernetes/overlays/stable/`, is that correct?

@2uasimojo
Contributor

Say, b3baff8 added a new mount in the efs-plugin container and relies on it existing in the watchdog. If you bounced your workers but didn't reapply the DaemonSet from latest, that could cause failures that might look like this.

2uasimojo added a commit to 2uasimojo/aws-efs-operator that referenced this issue Jun 23, 2020
Since the [fix](kubernetes-sigs/aws-efs-csi-driver#185) for [issue #167](kubernetes-sigs/aws-efs-csi-driver#167) merged, the AWS EFS CSI driver overwrote the `latest` image tag to include it.

For starters, this means we can update this operator to use the [new method of specifying access points](https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/0ae998c5a95fe6dbee7f43c182997e64872695e6/examples/kubernetes/access_points#edit-persistent-volume-spec) via a colon-delimited `volumeHandle`, as opposed to in `mountOptions`.

However, the same update to `latest` also brought in a [commit](kubernetes-sigs/aws-efs-csi-driver@b3baff8) that requires an additional mount in the `efs-plugin` container of the DaemonSet. So we need to update our YAML for that resource at the same time, or everything is broken (this might be upstream [issue #192](kubernetes-sigs/aws-efs-csi-driver#192)). This update to the DaemonSet YAML also syncs with [upstream](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/0ae998c5a95fe6dbee7f43c182997e64872695e6/deploy/kubernetes/base/node.yaml) by bumping the image versions for the other two containers (csi-node-driver-registrar: v1.1.0 => v1.3.0; livenessprobe: v1.1.0 => v2.0.0).
@ossareh
Contributor

ossareh commented Jun 23, 2020

@nmtulloch27 not necessarily. The base doesn't have an imagePullPolicy specified, so you'd need to add that. Unfortunately, Kustomize doesn't allow changing the pullPolicy via a transformer, based on this thread: kubernetes-sigs/kustomize#1493 (caveat: I don't use Kustomize, so maybe I'm reading this incorrectly).

So here's what I recommend, for now:

  1. Clone this repo
  2. Make the change I mentioned above, that is: add `imagePullPolicy: Always` to node.yaml
  3. Run `kubectl apply -k <local checkout of your clone>/deploy/kubernetes/overlays/dev`

The dev overlay specifies the ":latest" image, and the imagePullPolicy change ensures you're pulling the actual latest rather than the latest from whenever you first ran that command.

@ossareh
Contributor

ossareh commented Jun 23, 2020

@nmtulloch27 I was intrigued enough to check it out myself; see #195 for a branch which implements setting the imagePullPolicy as discussed.

@nmtulloch27

> @nmtulloch27 I was intrigued enough to check it out myself; see #195 for a branch which implements setting the imagePullPolicy as discussed.

I made the changes and applied them as per your instructions, and it seems to have resolved that issue. However, I am now getting:

```
MountVolume.MountDevice failed for volume "efs-pv" : rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock: connect: connection refused"
```

Did you experience this as well?

@ossareh
Contributor

ossareh commented Jun 23, 2020

@nmtulloch27 I did not.

What does `kubectl exec $(kubectl get po -l app=efs-csi-node -n kube-system -o jsonpath='{.items[0].metadata.name}') -n kube-system -it efs-plugin -- cat /var/log/amazon/efs/mount.log` return for you? (It's likely to be long, so perhaps just provide the final ~50 lines in a gist.)

@nmtulloch27

> @nmtulloch27 I did not. [...]

Hey, I have placed the gist below:
https://gist.github.com/nmtulloch27/61e5ee77a29e74e7f43d3a07f858da3d

@ossareh
Contributor

ossareh commented Jun 23, 2020

@nmtulloch27 that isn't at all what I was expecting, but it clearly calls out that the image is not there to be fetched. I'll ask the appropriate question over in #195.

@nmtulloch27

> @nmtulloch27 that isn't at all what I was expecting but it clearly calls out the image is not there to be fetched. I'll ask the appropriate question over in #195

Okay, I'll move to that thread.

@ossareh
Contributor

ossareh commented Jun 23, 2020

@nmtulloch27 I've pushed an update to my branch; give it a shot and update #195 with success / failure.

@nmtulloch27

nmtulloch27 commented Jun 23, 2020

> @nmtulloch27 I've pushed an update to my branch; give it a shot and update #195 with success / failure.

Okay cool.. how do I update it with success or failure? Still fairly new to the site. Also, running the new build I am now getting this error: "error: json: cannot unmarshal object into Go struct field Kustomization.patchesStrategicMerge of type patch.StrategicMerge". I think it's because of how the kustomization is structured.

@ossareh
Contributor

ossareh commented Jun 24, 2020

@nmtulloch27 oh, interesting. This is going back to my earlier comment of "I'm not familiar with kustomize". It turns out there are two ways to call kustomize: `kustomize build deploy/kubernetes/overlays/dev` and `kubectl apply -k deploy/kubernetes/overlays/dev`. My testing uses the former, but when I use the latter I get the same error you do. I'll look into it now.

RE: "update with success/ failure" - I just meant: write a message saying: "this works for me", or "this doesn't work for me" :)

I'm done for the day, but I'll be back tomorrow to take a look at this issue.

@nmtulloch27

> @nmtulloch27 oh, interesting. This is going back to my earlier comment of "I'm not familiar with kustomize". It turns out there are two ways to call kustomize: [...] I'm done for the day, but I'll be back tomorrow to take a look at this issue.

Oh haha, okay cool... have a good night!

@ossareh
Contributor

ossareh commented Jun 24, 2020

@nmtulloch27 I can't help myself! I took a look and brought both options, `kustomize build ...` and `kubectl apply -k`, into line with one another, I think. Can you give it a go? I think it's probably best if you clone github.com/ossareh/aws-efs-csi-driver, check out the fix_kustomize_dev_overlay branch, and run the `kubectl apply -k` from there.

@nmtulloch27

nmtulloch27 commented Jun 24, 2020

> @nmtulloch27 I can't help myself! I took a look and brought both options kustomize build... and kubectl apply -k into line with one another, I think. [...]

Haha nice, I will do that right now! This is what I see now:

```
error: error validating "aws-efs-csi-driver/deploy/kubernetes/overlays/dev/": error validating data: ValidationError(DaemonSet.spec.template.spec.containers): invalid type for io.k8s.api.core.v1.PodSpec.containers: got "map", expected "array"; if you choose to ignore these errors, turn validation off with --validate=false
```

Did you see this as well?

It's complaining about the latest_image file. Under containers, maybe `efs-plugin:` should be `- name: efs-plugin`.
-- Yup, it ran after I changed it to that.

@ossareh
Contributor

ossareh commented Jun 24, 2020

@nmtulloch27 thanks - that was super helpful.

@tom-beatdapp

Just for anyone running into this issue, if I can save you some debugging time: I saw this issue when I used the incorrect security groups for my EFS filesystem, so it's worth double-checking. The SG applied to each mount target should be the one that "allows inbound NFS traffic from within the VPC", which is created in these steps: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html

@nomopo45 nomopo45 reopened this Jul 2, 2020
@nomopo45
Author

nomopo45 commented Jul 2, 2020

So I closed the ticket because I thought the issue was solved, since it worked for me after using the latest image and not v0.3.0. But after trying again today it's not working anymore, even with the latest image. I also checked my SG and fully opened it just to test, but it is still not working (same error as before). I also tried the fix proposed by @ossareh by cloning his repo and applying the change, but I run into the same issue each time.

Here is the error I now have:

```
pod has unbound immediate PersistentVolumeClaims
MountVolume.SetUp failed for volume "efs-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "fs-3b52ddd6:/" at "/var/lib/kubelet/pods/d3115c30-8ced-4fa0-a088-d72688272848/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs fs-3b52ddd6:/ /var/lib/kubelet/pods/d3115c30-8ced-4fa0-a088-d72688272848/volumes/kubernetes.io~csi/efs-pv/mount
Output: Traceback (most recent call last):
  File "/sbin/mount.efs", line 1537, in <module>
    main()
  File "/sbin/mount.efs", line 1517, in main
    bootstrap_logging(config)
  File "/sbin/mount.efs", line 1187, in bootstrap_logging
    raw_level = config.get(CONFIG_SECTION, 'logging_level')
  File "/lib64/python2.7/ConfigParser.py", line 607, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'mount'
```

@2uasimojo
Contributor

@nomopo45 Since #202, latest isn't actually latest anymore. You may be bouncing off of #196.

From now until we have a new release, I would suggest keeping your YAML and image tag in sync. For example, the current tip of the master branch is at commit 778131e, which corresponds to image tag 778131e. I do this in my operator by locking the YAML from that commit to that image tag.
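As a sketch of that pinning approach with kustomize (illustrative only; 778131e is the tag mentioned above, and the overlay path assumes the repo layout used earlier in this thread):

```yaml
# deploy/kubernetes/overlays/stable/kustomization.yaml (fragment, illustrative)
images:
  - name: amazon/aws-efs-csi-driver
    newTag: "778131e"  # pin the image to the same commit as the deployed YAML
```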

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 30, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 30, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
