
Use kubebuilder 1.16 instead of the code generator directly for CRDs #16

Closed
antoninbas opened this issue Nov 4, 2019 · 3 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@antoninbas
Contributor

No description provided.

@jianjuns
Contributor

jianjuns commented Nov 5, 2019

Even though we have CRDs, our current usage is quite simple: we do not really need a CRD controller, we just get and update the CRDs through the K8s API. The CRDs are mainly for exposing Antrea runtime information to users and the UI for debugging purposes.
@timothysc : in this case do you still see benefits of switching to kubebuilder?

@tnqn @mengdie-song

@abhiraut
Contributor

abhiraut commented Nov 5, 2019

we may build CRDs in the future that are not as simple as this one.. probably better to switch now with the future in mind?

antoninbas added a commit to antoninbas/antrea that referenced this issue Jan 10, 2020
This is still very much a work-in-progress; I'm opening this PR to
gather feedback on the approach.

For someone deleting Antrea, the steps would be as follows:
 * `kubectl delete -f <path to antrea.yml>`
 * `kubectl apply -f <path to antrea-cleanup.yml>`
 * check that job has completed with `kubectl -n kube-system get jobs`
 * `kubectl delete -f <path to antrea-cleanup.yml>`

The cleanup manifest creates a DaemonSet that will perform the necessary
deletion tasks on each Node. After the tasks have been completed, the
"status" is reported to the cleanup controller through a custom
resource. Once the controller has received enough statuses (or after a
timeout of 1 minute), the controller job completes and the user can
delete the cleanup manifest.

Known remaining items:
 * place cleanup binaries (antrea-cleanup-agent and
 antrea-cleanup-controller) in a separate docker image to avoid
 increasing the size of the main Antrea docker image
 * generate manifest with kustomize?
 * find a way to test this as part of CI?
 * update documentation
 * additional cleanup tasks: as of now we only take care of deleting the
 OVS bridge
 * place cleanup CRD in non-default namespace
 * use kubebuilder instead of the code generator directly (related to
 antrea-io#16); we probably want to punt this task to a future PR.

See antrea-io#181
@github-actions
Contributor

This issue is stale because it has been open 180 days with no activity. Remove the stale label or comment, or this will be closed in 180 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 18, 2020
zyiou referenced this issue in zyiou/antrea Jul 2, 2021

3 participants