
Garbage collection step is skipped if there are no manifests in the repo #187

Closed
dinosk opened this issue Nov 27, 2020 · 6 comments · Fixed by #426
Labels
wontfix This will not be worked on

Comments


dinosk commented Nov 27, 2020

Noticed in version ghcr.io/fluxcd/kustomize-controller:v0.2.2

Steps to reproduce

  • Added a deployment manifest in a repository watched by a Kustomization object with prune: true.
    The deployment was scheduled.
  • Removed the manifest from the repository, but the pod wasn't removed; the kustomize-controller logs:
... validation failed: error: no objects passed to apply
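For reference, a minimal sketch of the kind of Kustomization described above (the names, path, and API version are assumptions and may differ for the controller version in use):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: test-2            # hypothetical name, matching the logs below
  namespace: flux-system
spec:
  interval: 5m
  path: ./deploy          # directory whose only manifest gets deleted
  prune: true             # garbage-collect resources removed from the repo
  sourceRef:
    kind: GitRepository
    name: my-repo         # hypothetical source name
```

With prune: true, deleting the last manifest under path leaves nothing to apply, which is what triggers the error above.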

In another test, with two deployment manifests and then deleting one, the pod was removed as expected:

{"level":"info","ts":"2020-11-27T12:43:41.447Z","logger":"controllers.Kustomization","msg":"Kustomization applied in 344.740099ms","kustomization":"test-2","output":{"deployment.apps/podinfo2":"configured"}}
{"level":"info","ts":"2020-11-27T12:43:41.456Z","logger":"controllers.Kustomization","msg":"garbage collection completed: Deployment/test3/podinfo deleted\n","kustomization":"test-2"}
{"level":"info","ts":"2020-11-27T12:43:41.472Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 785.636958ms, next run in ..
@stefanprodan
Member

We can't apply an empty repository with kubectl, and because apply happens before GC, reconciliation never gets to pruning since apply fails.

@dinosk
Author

dinosk commented Nov 27, 2020

Got it, thank you for the info! I don't have enough context on the order of reconciliation, but could this work by skipping the apply step when validation returns an error, and continuing on to prune?
Another approach could be an exit function that always prunes before returning the reconciliation result. Happy to try and piece something together if this could potentially work.

@stefanprodan
Member

stefanprodan commented Nov 28, 2020

If the validation fails and we force GC, then it will delete everything due to using the old checksum. Imagine a typo in a YAML file wiping your cluster clean... I find this unacceptable.

@bboreham

Could we advise users to include some innocuous resource in each directory, which they never delete?

@stefanprodan
Member

A dummy configmap should do
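A minimal sketch of such a placeholder resource (the name is arbitrary and hypothetical; any innocuous resource that is never deleted would do):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flux-keep   # hypothetical name; exists only so the directory is never empty
data: {}            # no actual configuration data
```

Committing one such manifest per pruned directory ensures apply always has at least one object, so the GC step is never skipped.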

@eloo

eloo commented Jun 30, 2021

Hi, I just stumbled across this behavior and found this GH issue.

@stefanprodan can you maybe give an example with the dummy configmap? Will this create a real resource?
Is there any best practice for really cleaning up resources?

Maybe this limitation should be added to the docs, with a workaround, if it's not going to be fixed.

Best regards
eloo
