Add more description about IP pool
Co-authored-by: Lucas Saintarbor <lucas.saintarbor@suse.com>
yaocw2020 and LucasSaintarbor committed Aug 10, 2023
1 parent 9b089ab commit 3703f4e
Showing 3 changed files with 119 additions and 46 deletions.
87 changes: 81 additions & 6 deletions docs/networking/ippool.md
@@ -7,7 +7,7 @@ keywords:
---
_Available as of v1.2.0_

Harvester IP Pool is a built-in IP address management (IPAM) solution exclusively available to Harvester Load Balancers (LBs).
Harvester IP Pool is a built-in IP address management (IPAM) solution exclusively available to Harvester load balancers (LBs).

## Features
- **Multiple IP ranges:** Each IP pool can contain multiple IP ranges or CIDRs.
@@ -25,16 +25,91 @@ An IP pool can have specific scopes, and you can specify the corresponding requi
- Network is a hard condition. The optional IP pool must match the value of the LB annotation `loadbalancer.harvesterhci.io/network`.
- Every IP pool, except the global IP pool, has a unique scope different from others if its priority is `0`. The project, namespace, or cluster name of LBs should be in the scope of the IP pool if they want to get an IP from this pool.
- `spec.selector.priority` specifies the priority of the IP pool. The larger the number, the higher the priority. If the priority is not `0`, the value must be unique among IP pools. The priority helps you migrate an old IP pool to a new one.
- If the IP Pool has a scope that matches all projects, namespaces, and guest clusters, it's called a global IP pool. It's only allowed to have one global IP pool. If there is no IP pool matching the requirements of the LB, the IPAM will allocate an IP from the global IP pool if it exists.
- If the IP Pool has a scope that matches all projects, namespaces, and guest clusters, it's called a global IP pool, and only one global IP pool is allowed. If there is no IP pool matching the requirements of the LB, the IPAM will allocate an IP address from the global IP pool if it exists.

### Examples
- **Example 1:** You want to configure an IP pool with the range `192.168.100.0/24` for the namespace `default`. All the load balancers in the namespace `default` get an IP from this IP pool. For an example, refer to the following `.yaml`:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  name: default-ip-pool
spec:
  ranges:
  - subnet: 192.168.100.0/24
  selector:
    scope:
      namespace: default
```
- **Example 2:** You have a guest cluster `rke2` deployed with network `default/vlan1` in the project/namespace `product/default` and want to configure an exclusive IP pool with range `192.168.10.10-192.168.10.20` for it. For an example, refer to the following `.yaml`:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  name: rke2-ip-pool
spec:
  ranges:
  - subnet: 192.168.10.0/24
    rangeStart: 192.168.10.10
    rangeEnd: 192.168.10.20
  selector:
    network: default/vlan1
    scope:
      project: product
      namespace: default
      cluster: rke2
```

- **Example 3:** You want to migrate the IP pool `default-ip-pool` to a different IP pool `default-ip-pool-2` with range `192.168.200.0/24`. The IP pool `default-ip-pool-2` has a higher priority than `default-ip-pool`. For an example, refer to the following `.yaml`:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  name: default-ip-pool-2
spec:
  ranges:
  - subnet: 192.168.200.0/24
  selector:
    priority: 1
    scope:
      namespace: default
```

- **Example 4:** You want to configure a global IP pool with range `192.168.20.0/24`. For an example, refer to the following `.yaml`:

```yaml
apiVersion: networking.harvesterhci.io/v1beta1
kind: IPPool
metadata:
  name: global-ip-pool
spec:
  ranges:
  - subnet: 192.168.20.0/24
  selector:
    scope:
      project: "*"
      namespace: "*"
      cluster: "*"
```

## Allocation policy
- The IP pool prefers to allocate the previously assigned IP according to the given history.
- IP allocation follows the round-robin policy.
- The IP pool prefers to allocate the previously assigned IP address according to the given history.
- IP address allocation follows the round-robin policy.

## How to create
To create a new IP pool:

1. Go to the **Networks > IP Pools** page and select **Create**.
1. Go to the **Networks** > **IP Pools** page and select **Create**.
1. Specify the **Name** of the IP pool.
1. Go to the **Range** tab to specify the **IP ranges** for the IP pool. You can add multiple IP ranges.
1. Go to the **Selector** tab to specify the **Scope** and **Priority** of the IP pool.
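
Alternatively, you can create and inspect IP pools from the command line by applying one of the example manifests above. A minimal sketch, assuming `kubectl` access to the Harvester cluster and that the CRD's plural resource name is `ippools` (an assumption worth verifying with `kubectl api-resources`):

```bash
# Apply an IPPool manifest, for example the default-ip-pool example above.
kubectl apply -f default-ip-pool.yaml

# List the configured pools; the full resource name is assumed here.
kubectl get ippools.networking.harvesterhci.io
```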

:::note

Starting with Harvester v1.2.0, the `vip-pools` setting is deprecated. After upgrading, the `vip-pools` setting will be automatically converted to IP pools.

:::
16 changes: 8 additions & 8 deletions docs/networking/loadbalancer.md
@@ -7,7 +7,7 @@ keywords:
---
_Available as of v1.2.0_

Harvester load balancer is a built-in Layer 4 load balancer that can be used to distribute incoming traffic across workloads deployed on Harvester virtual machines (VMs) or guest Kubernetes clusters.
The Harvester load balancer (LB) is a built-in Layer 4 load balancer that distributes incoming traffic across workloads deployed on Harvester virtual machines (VMs) or guest Kubernetes clusters.

## VM load balancer

@@ -23,24 +23,24 @@ Harvester VM load balancer supports the following features:
### Limitations
Harvester VM load balancer has the following limitations:

- **Namespace restriction:** This restriction is in place to facilitate permission management and ensures the LB only uses VMs in the same namespace as the backend servers.
- **Namespace restriction:** This restriction facilitates permission management and ensures the LB only uses VMs in the same namespace as the backend servers.
- **IPv4-only:** The LB is only compatible with IPv4 addresses for VMs.
- **Guest agent installation:** Installing the guest agent on each backend VM is required to obtain IP addresses.
- **Connectivity Requirement:** Network connectivity must be established between backend VMs and Harvester hosts. When a VM has multiple IP addresses, the LB will select the first one as the backend address.
- **Access Restriction:** The VM LB address is exposed only within the same network as the Harvester hosts. If you wish to access the LB from outside the network, you must provide a route from outside to the LB address.
- **Access Restriction:** The VM LB address is exposed only within the same network as the Harvester hosts. To access the LB from outside the network, you must provide a route from outside to the LB address.

### How to create
To create a new VM load balancer:
To create a new Harvester VM load balancer:
1. Go to the **Networks > Load Balancer** page and select **Create**.
1. Select the **Namespace** and specify the **Name**.
1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, you must prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB will automatically select an IP pool according to the matching rules.
1. Go to the **Basic** tab to choose the IPAM mode, which can be **DHCP** or **IP Pool**. If you select **IP Pool**, prepare an IP pool first, specify the IP pool name, or choose **auto**. If you choose **auto**, the LB automatically selects an IP pool according to [the IP pool selection policy](/networking/ippool.md/#selection-policy).
1. Go to the **Listeners** tab to add listeners. You must specify the **Port**, **Protocol**, and **Backend Port** for each listener.
1. Go to the **Backend Server Selector** tab to add label selectors. If you want to add the VM to the LB, go to the **Virtual Machine > Instance Labels** tab to add the corresponding labels to the VM.
1. Go to the **Backend Server Selector** tab to add label selectors. To add the VM to the LB, go to the **Virtual Machine > Instance Labels** tab to add the corresponding labels to the VM.
1. If the backend service supports health checks, go to the **Health Check** tab to enable them and specify the parameters, including the **Port**, **Success Threshold**, **Failure Threshold**, **Interval**, and **Timeout**.
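
If you prefer to manage the load balancer as a manifest rather than through the UI, the sketch below maps the options above onto a `LoadBalancer` custom resource. This is a hypothetical sketch only: the API group, kind, and field names are assumptions inferred from the UI options and are not confirmed by this page, so verify them against the CRDs installed on your cluster before use.

```yaml
# Hypothetical sketch; field names are assumptions inferred from the UI options above.
apiVersion: loadbalancer.harvesterhci.io/v1beta1
kind: LoadBalancer
metadata:
  name: demo-vm-lb
  namespace: default
spec:
  workloadType: vm              # VM load balancer (assumed field)
  ipam: pool                    # or dhcp
  listeners:
  - name: http
    port: 80
    protocol: TCP
    backendPort: 8080
  backendServerSelector:        # matches VMs labeled app=demo via Instance Labels
    app:
    - demo
  healthCheck:                  # optional; only if the backend supports health checks
    port: 8080
    successThreshold: 1
    failureThreshold: 3
    periodSeconds: 5
    timeoutSeconds: 3
```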

## Guest Kubernetes cluster load balancer
In conjunction with Harvester Cloud Provider, the Harvester load balancer provides load balancing for LB services in the guest cluster.

When you create, update, or delete a LB service on a guest cluster with Harvester Cloud Provider, the Harvester Cloud Provider will create a Harvester LB automatically.
When you create, update, or delete an LB service on a guest cluster with Harvester Cloud Provider, the Harvester Cloud Provider will create a Harvester LB automatically.
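
For example, a standard Kubernetes `LoadBalancer` Service created in the guest cluster is enough to trigger provisioning. The names, labels, and ports below are purely illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-lb              # illustrative name
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: demo                # matches the workload pods to expose
  ports:
  - port: 80                 # port exposed on the load balancer address
    targetPort: 8080         # port the pods listen on
```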

Refer to [Harvester Cloud Provider](/rancher/cloud-provider.md) for more details.
For more details, refer to [Harvester Cloud Provider](/rancher/cloud-provider.md).
62 changes: 30 additions & 32 deletions docs/rancher/cloud-provider.md
@@ -44,32 +44,41 @@ When spinning up an RKE2 cluster using the Harvester node driver, select the `Ha

![](/img/v1.2/rancher/rke2-cloud-provider.png)

### Deploying to the K3s Cluster with Harvester Node Driver [Experimental]
### Deploying to the RKE2 Custom Cluster

When spinning up a K3s cluster using the Harvester node driver, you can perform the following steps to deploy the harvester cloud provider:
1. Use `generate_addon.sh` to generate the cloud config and place it at `/var/lib/rancher/rke2/etc/config-files/cloud-config` on every node.

1. Generate and inject cloud config for `harvester-cloud-provider`
```
curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s <serviceaccount name> <namespace>
```

The cloud provider needs a kubeconfig file to work; a limited-scope one can be generated using the [generate_addon.sh](https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh) script available in the [harvester/cloud-provider-harvester](https://github.com/harvester/cloud-provider-harvester) repo.
:::note

:::note
The `generate_addon.sh` script depends on `kubectl` and `jq` to operate the Harvester cluster.

The script depends on `kubectl` and `jq` to operate the Harvester cluster
The script needs access to the `Harvester Cluster` kubeconfig to work.

The script needs access to the `Harvester Cluster` kubeconfig to work.
The namespace needs to be the namespace in which the guest cluster will be created.

The namespace needs to be the namespace in which the guest cluster will be created.
:::

:::

2. Select the `Harvester` cloud provider.
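
A minimal sketch of step 1, assuming the script is run from a machine with `kubectl`, `jq`, and access to the Harvester cluster kubeconfig. The exact name of the file the script writes under `./tmp/kube` is an assumption, so adjust the copy step to match the script's actual output:

```bash
# Generate a limited-scope cloud config for the cloud provider.
curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s harvester-cloud-provider default

# Place the generated cloud config on every RKE2 node at the path the cluster expects
# (the output file name under ./tmp/kube is assumed).
mkdir -p /var/lib/rancher/rke2/etc/config-files
cp ./tmp/kube/cloud-config /var/lib/rancher/rke2/etc/config-files/cloud-config
```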

### Deploying to the K3s Cluster with Harvester Node Driver [Experimental]

When spinning up a K3s cluster using the Harvester node driver, you can perform the following steps to deploy the harvester cloud provider:

1. Generate and inject cloud config for `harvester-cloud-provider`

```
./deploy/generate_addon.sh <serviceaccount name> <namespace>
curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s <serviceaccount name> <namespace>
```

The output will look as follows:

```
# ./deploy/generate_addon.sh harvester-cloud-provider default
# curl -sfL https://raw.githubusercontent.com/harvester/cloud-provider-harvester/master/deploy/generate_addon.sh | bash -s harvester-cloud-provider default
Creating target directory to hold files in ./tmp/kube...done
Creating a service account in default namespace: harvester-cloud-provider
W1104 16:10:21.234417 4319 helpers.go:663] --dry-run is deprecated and can be replaced with --dry-run=client.
@@ -147,7 +156,7 @@ spec:
bootstrap: true
repo: https://charts.harvesterhci.io/
chart: harvester-cloud-provider
version: 0.1.13
version: 0.2.2
helmVersion: v3
```

@@ -179,6 +188,7 @@ spec:

With these settings in place, a K3s cluster should provision successfully while using the external cloud provider.


## Upgrade Cloud Provider

### Upgrade RKE2
@@ -202,31 +212,19 @@ After deploying the `Harvester Cloud provider`, you can use the Kubernetes `Load


### IPAM
Harvester's built-in load balancer supports both `pool` and `dhcp` modes. You can select the mode in the Rancher UI. Harvester adds the annotation `cloudprovider.harvesterhci.io/ipam` to the service behind.
Harvester's built-in load balancer supports both **DHCP** and **Pool** modes, and you can select the mode in the Rancher UI. Harvester adds the annotation `cloudprovider.harvesterhci.io/ipam` to the service. Additionally, the Harvester cloud provider offers a special **Share IP** mode, in which a service shares its load balancer IP with other services.

- pool: You should configure an IP address pool in Harvester's `Settings` in advance. The Harvester LoadBalancer controller will allocate an IP address from the IP address pool for the load balancer.

![](/img/v1.2/rancher/vip-pool.png)

- dhcp: A DHCP server is required. The Harvester LoadBalancer controller will request an IP address from the DHCP server.
- **DHCP:** A DHCP server is required. The Harvester load balancer controller will request an IP address from the DHCP server.

- **Pool:** You need an IP pool configured in the Harvester UI. The Harvester load balancer controller will allocate an IP for the load balancer service following [the IP pool selection policy](/networking/ippool.md/#selection-policy).

- **Share IP:** When creating a new load balancer service, you can select an existing load balancer service and reuse its load balancer IP. The new service is called the secondary service, and the chosen existing service is called the primary service. You can specify the primary service in the secondary service using the annotation `cloudprovider.harvesterhci.io/primary-service`. There are two limitations:
  - A secondary service cannot share its IP address with other services.
  - Services sharing the same IP address cannot have duplicate ports.
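
A minimal sketch of the annotations described above: a primary service that requests an address in **DHCP** mode, and a secondary service that shares that address. The annotation keys come from this section; the exact value formats (for example, whether the primary service is referenced as `namespace/name` or just by name) are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: primary-lb
  namespace: default
  annotations:
    cloudprovider.harvesterhci.io/ipam: dhcp          # or "pool"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: secondary-lb
  namespace: default
  annotations:
    # Reference format is an assumption; the secondary service reuses primary-lb's address.
    cloudprovider.harvesterhci.io/primary-service: default/primary-lb
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
  - port: 443                                         # must not duplicate a port of primary-lb
    targetPort: 8443
```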

:::note

The IPAM mode cannot be modified after the service is created. To change the IPAM mode, you must create a new service.

:::

### Health Checks
The Harvester load balancer supports TCP health checks. You can specify the parameters in the Rancher UI if you enable the `Health Check` option.

![](/img/v1.2/rancher/health-check.png)

Alternatively, you can specify the parameters by adding annotations to the service manually. The following annotations are supported:

| Annotation Key | Value Type | Required | Description |
|:---|:---|:---|:---|
| `cloudprovider.harvesterhci.io/healthcheck-port` | string | true | Specifies the port. The prober will access the address composed of the backend server IP and the port. |
| `cloudprovider.harvesterhci.io/healthcheck-success-threshold` | string | false | Specifies the health check success threshold. The default value is 1. The backend server will start forwarding traffic if the number of times the prober continuously detects an address successfully reaches the threshold. |
| `cloudprovider.harvesterhci.io/healthcheck-failure-threshold` | string | false | Specifies the health check failure threshold. The default value is 3. The backend server will stop forwarding traffic if the number of health check failures reaches the threshold. |
| `cloudprovider.harvesterhci.io/healthcheck-periodseconds` | string | false | Specifies the health check period. The default value is 5 seconds. |
| `cloudprovider.harvesterhci.io/healthcheck-timeoutseconds` | string | false | Specifies the timeout of every health check. The default value is 3 seconds. |
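
A minimal sketch of a service annotated with the health-check parameters from the table above; the port values are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  namespace: default
  annotations:
    cloudprovider.harvesterhci.io/healthcheck-port: "8080"            # probed on each backend server IP
    cloudprovider.harvesterhci.io/healthcheck-success-threshold: "1"
    cloudprovider.harvesterhci.io/healthcheck-failure-threshold: "3"
    cloudprovider.harvesterhci.io/healthcheck-periodseconds: "5"
    cloudprovider.harvesterhci.io/healthcheck-timeoutseconds: "3"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```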
