Multiple PVC on same pod with the same access point #852

Closed
andre-lx opened this issue Dec 13, 2022 · 4 comments
Labels: kind/bug, lifecycle/rotten

Comments

@andre-lx

andre-lx commented Dec 13, 2022

/kind bug

What happened?

Hello.

I'm still facing the issue described in #167 when using the same access point in two different PVs mounted to the same pod.

On Azure, using their CSI driver, this does not happen, since the volume handle is always different. For example:

volumeHandle: xxxxxx#xxxx331#shared##pvc-xxxx#e7048599-f033-404a-9377-775c43621e97

Using the EFS CSI driver, the volume handle is always the same for a given access point, which leads to the error described in #167. As an example:

volumeHandle: fs-xxx::fsap-xxxx
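
If I understand the handle format correctly (this is an assumption on my part: [FileSystemId]:[Subpath]:[AccessPointId], with the subpath token left empty above), every PV that points directly at the same access point gets a byte-for-byte identical handle, and kubelet then seems to treat those PVs as one and the same volume:

# Both PVs in the repro below carry exactly this csi block, so kubelet sees a
# single volume identity (driver + volumeHandle) behind two different PV objects.
csi:
  driver: efs.csi.aws.com
  volumeHandle: fs-xxx::fsap-xxx   # [FileSystemId]:[Subpath]:[AccessPointId], subpath empty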

How to reproduce it (as minimally and precisely as possible)?

apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Ti
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxx::fsap-xxx
  persistentVolumeReclaimPolicy: Retain
  storageClassName: sc
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume1-claim
spec:
  storageClassName: sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume2
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Ti
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxx::fsap-xxx
  persistentVolumeReclaimPolicy: Retain
  storageClassName: sc
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume2-claim
spec:
  storageClassName: sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Ti
---
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: nginx:latest
    name: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /mnt/xxx/
      name: volume1
      subPath: a/b
    - mountPath: /mnt/yyy/
      name: volume2
      subPath: c/d
  volumes:
  - name: volume1
    persistentVolumeClaim:
      claimName: volume1-claim
  - name: volume2
    persistentVolumeClaim:
      claimName: volume2-claim

What you expected to happen?

It should be possible to mount two PVCs that use the same access point to the same pod.

I'm not entirely sure that the volumeHandle is the cause, but it looks like it is.
I'm also not sure whether anything can be done in the EFS CSI driver, since treating identical volume handles as the same volume seems to be the intended behaviour on the Kubernetes side. But we are facing this issue, and we would like to know whether there is any way to mitigate it.
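
One mitigation we are considering (an untested sketch, and it assumes the driver accepts a subpath token between the file system ID and the access point ID, i.e. [FileSystemId]:[Subpath]:[AccessPointId]) is to make each handle unique by moving the pod-level subPath into the volumeHandle itself:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Ti
  csi:
    driver: efs.csi.aws.com
    # The subpath a/b baked into the handle makes it differ from volume2's handle,
    # so kubelet should no longer collapse the two PVs into a single volume.
    volumeHandle: fs-xxx:/a/b:fsap-xxx
  persistentVolumeReclaimPolicy: Retain
  storageClassName: sc
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume2
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Ti
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxx:/c/d:fsap-xxx   # unique handle for the second PV
  persistentVolumeReclaimPolicy: Retain
  storageClassName: sc
  volumeMode: Filesystem

The pod would then drop the subPath fields from its volumeMounts, since the handles already point at a/b and c/d. I have not verified whether the driver resolves that subpath relative to the access point's root directory, so please treat this purely as a sketch.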

Thanks, André Vieira

Environment

  • Kubernetes version (use kubectl version): 1.22
  • Driver version: 1.4.7
@k8s-ci-robot added the kind/bug label on Dec 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 13, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on May 12, 2023
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:


/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
