This repository has been archived by the owner on Sep 7, 2022. It is now read-only.

Fix locating resource-pool specified in the vsphere.conf file #493

Closed
mansi-a opened this issue Jun 27, 2018 · 2 comments

mansi-a commented Jun 27, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

What happened:
We need to specify resourcepool-path when using SPBM policies with dynamic provisioning in the vSphere Cloud Provider; otherwise PVC provisioning fails with this error:
Warning ProvisioningFailed 3m (x61 over 18m) persistentvolume-controller Failed to provision volume with StorageClass "xxx": default compute resource resolves to multiple instances, please specify

resourcepool-path is the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning should be created. It is an optional parameter.
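For reference, a minimal sketch of the relevant `vsphere.conf` fragment (the server, datacenter, and pool names below are placeholders, not values from this issue):

```ini
[Workspace]
    server = "vcenter.example.com"
    datacenter = "datacenter-xxx"
    folder = "kubernetes"
    default-datastore = "vsanDatastore"
    # Relative path of the resource pool under which the dummy VMs
    # for SPBM-based provisioning are created
    resourcepool-path = "cluster-xxx/Resources/ShadowVMPool"
```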

If the admin renames the vCenter cluster referenced by the above configuration, dynamic provisioning breaks:
Failed to provision volume with StorageClass "xxx": compute resource '/datacenter-xxx/host/cluster-xxx' not found.

Using an identifier based on the MoID (Managed Object ID) instead of a path for the compute resource would be more stable.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

@divyenpatel

We have a bug here. See the following code.

https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/vsphere/vclib/datacenter.go#L158:23

Call trace:
func (vs *VSphere) CreateVolume(volumeOptions *vclib.VolumeOptions) (canonicalVolumePath string, err error)
vmOptions, err = vs.setVMOptions(ctx, dc, vs.cfg.Workspace.ResourcePoolPath)
resourcePool, err := dc.GetResourcePool(ctx, resourcePoolPath)

Code block where the fix needs to be made:

if computePath == "" {
     computeResource, err = finder.DefaultComputeResource(ctx)
} else {
     computeResource, err = finder.ComputeResource(ctx, computePath)
}

Here the method accepts computePath, which is actually the resourcepool-path set in the vsphere.conf file.
The lookup is made using finder.DefaultComputeResource(ctx) and finder.ComputeResource(ctx, computePath), which is not correct.

This can be resolved by using the correct method: func (f *Finder) ResourcePoolOrDefault(ctx context.Context, path string) (*object.ResourcePool, error)
https://github.com/kubernetes/kubernetes/blob/master/vendor/github.com/vmware/govmomi/find/finder.go#L867
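To make the failure mode concrete, here is a minimal, self-contained Go sketch. It is a toy stand-in for govmomi's `Finder` (the inventory map, paths, and helper names are hypothetical, not the real API), illustrating why a compute-resource lookup cannot resolve a resource-pool path while a `ResourcePoolOrDefault`-style lookup can:

```go
package main

import (
	"errors"
	"fmt"
)

// Toy inventory mapping inventory paths to object kinds. This is a
// hypothetical stand-in for the vSphere inventory, not the govmomi API.
var inventory = map[string]string{
	"/dc-1/host/cluster-1":                        "ComputeResource",
	"/dc-1/host/cluster-1/Resources":              "ResourcePool", // cluster's root pool
	"/dc-1/host/cluster-1/Resources/ShadowVMPool": "ResourcePool",
}

// computeResource mimics finder.ComputeResource: it only matches objects
// of kind ComputeResource, so a resource-pool path fails to resolve.
func computeResource(path string) (string, error) {
	if inventory[path] == "ComputeResource" {
		return path, nil
	}
	return "", fmt.Errorf("compute resource '%s' not found", path)
}

// resourcePoolOrDefault mimics finder.ResourcePoolOrDefault: it matches
// ResourcePool objects directly, and treats a compute-resource path as
// that cluster's root pool ("<cluster>/Resources").
func resourcePoolOrDefault(path string) (string, error) {
	switch inventory[path] {
	case "ResourcePool":
		return path, nil
	case "ComputeResource":
		return path + "/Resources", nil
	}
	return "", errors.New("no default resource pool found")
}

func main() {
	poolPath := "/dc-1/host/cluster-1/Resources/ShadowVMPool"

	// Buggy lookup: a resourcepool-path is not a compute resource.
	if _, err := computeResource(poolPath); err != nil {
		fmt.Println("buggy lookup:", err)
	}

	// Fixed lookup resolves the same path.
	if pool, err := resourcePoolOrDefault(poolPath); err == nil {
		fmt.Println("fixed lookup:", pool)
	}
}
```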

I will make the above change and try to cherry-pick it into older releases.

Asking users to put a MoID in the vsphere.conf file seems developer-friendly, but not administrator-friendly.

@divyenpatel divyenpatel changed the title Updating cluster name in vCenter could cause inconsistencies with dynamic provisioning Fix locating resource-pool specified in the vsphere.conf file Jul 17, 2018
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Jul 17, 2018
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix locating resourcepool-path specified in the vsphere.conf file

**What this PR does / why we need it**:
When a volume is provisioned using a vSphere storage policy, the `resourcepool-path` specified in the `vsphere.conf` file is used for creating a shadow/dummy VM. The dummy VM is created temporarily and deleted once the volume is created on a compatible datastore.

At present, if the user specifies `resourcepool-path` in the `vsphere.conf` file, the volume provisioner is not able to locate the compute resource for the given path. This is because the lookup is made using `finder.DefaultComputeResource(ctx)` and `finder.ComputeResource(ctx, computePath)`, which is not correct. If the user instead specifies the name of the cluster or the cluster path, provisioning works.

This is resolved by using the correct govmomi method: `func (f *Finder) ResourcePoolOrDefault(ctx context.Context, path string) (*object.ResourcePool, error)`


**Which issue(s) this PR fixes**
Fixes vmware-archive#493

**Special notes for your reviewer**:
The following testing was performed for this change.

1) Specified the resource-pool path in the `vsphere.conf` file and verified the VM is created under the specified resource pool.

```
resourcepool-path="ClusterFolder-1/cluster-vsan-1/Resources/ShadowVMPool"
```

2) With no resource pool available: specified the cluster's default resource pool path in the `vsphere.conf` file and verified volume provisioning works. In this case, the VM is created directly under the cluster.

```
resourcepool-path="ClusterFolder-1/cluster-vsan-1/Resources"
```

3) Verified the above with multiple clusters having the same name in one datacenter.
4) Verified with an empty resource pool path in the `vsphere.conf` file.

```
resourcepool-path=""
```
As expected, provisioning fails with `Failed to provision volume with StorageClass "vsan-gold-policy": no default resource pool found`.

Refer to this datacenter inventory for the path specified in the `resourcepool-path` configuration.

![image](https://user-images.githubusercontent.com/22985595/42792922-738e3f9c-892c-11e8-9e51-32e2328b116b.png)

The current documentation describes the `resourcepool-path` configuration as optional; this needs to be corrected once the PR is merged. For policy-based provisioning it is not an optional parameter.

Documentation link: https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html



**Release note**:

```release-note
Fix for resourcepool-path configuration in the vsphere.conf file.
```

cc: @kubernetes/vmware
@divyenpatel

Closing this issue. The PR is merged to the master branch. I will cherry-pick this change to the other release branches.
