Validate collisions of logmodules and OneAgents in the validation webhook #3801
base: main
Conversation
Codecov Report
Attention: Patch coverage is

@@ Coverage Diff @@
##            main    #3801   +/- ##
=======================================
  Coverage       ?   65.02%
=======================================
  Files          ?      398
  Lines          ?    21469
  Branches       ?        0
=======================================
  Hits           ?    13961
  Misses         ?     6170
  Partials       ?     1338

Flags with carried forward coverage won't be shown.
Other than @waodim's comment, it works and looks "as OK as it can", considering the need to validate against each old API version as well.
IMO, we should rethink how we validate when multiple API versions are in use, as this can get out of hand quickly. WDYT?
My testing setup:
apiVersion: dynatrace.com/v1beta3
kind: DynaKube
metadata:
name: zib50933zib50933zib50933zib50933zib50933
namespace: dynatrace
spec:
apiUrl: https://zib50933.dev.dynatracelabs.com/api
tokens: zib50933
dynatraceApiRequestThreshold: 15
logModule:
enabled: true
templates:
logModule:
nodeSelector:
dk-log: deploy
imageRef:
repository: us-central1-docker.pkg.dev/cloud-platform-207208/chmu/logmodule
tag: latest
---
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
name: zib
namespace: dynatrace
spec:
apiUrl: https://zib50933.dev.dynatracelabs.com/api
tokens: zib50933
enableIstio: true
dynatraceApiRequestThreshold: 15
oneAgent:
cloudNativeFullStack:
namespaceSelector:
matchLabels:
monitored/by: v2
nodeSelector:
dk-v2: deploy
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
---
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
name: zib50933-old
namespace: dynatrace
spec:
apiUrl: https://zib50933.dev.dynatracelabs.com/api
tokens: zib50933
namespaceSelector:
matchLabels:
monitored/by: v1
oneAgent:
cloudNativeFullStack:
nodeSelector:
dk-v1: deploy
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
I tried different orders of application, and if no nodeSelector is specified, the DynaKube gets rejected as expected.
Code LGTM, and it also worked for me, but I agree with @0sewa0 that, if possible, we should think about other ways of validating that don't require touching every API version's validation when adding something.
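To make the "rethink how we validate across versions" idea a bit more concrete, here is a purely hypothetical sketch (none of these type or function names exist in the operator; the overlap check is passed in as a parameter): each stored API version converts itself into one canonical view, and the conflict rules are written once against that view, so adding a new version only means providing the conversion.

// conflictView holds only what the node-conflict validators need.
type conflictView struct {
	Name                  string
	NeedsOneAgent         bool
	NeedsLogModule        bool
	OneAgentNodeSelector  map[string]string
	LogModuleNodeSelector map[string]string
}

// versionedDynaKube would be implemented once per stored API version (v1beta1, v1beta2, ...).
type versionedDynaKube interface {
	ToConflictView() conflictView
}

// validateNoNodeConflicts contains no per-version logic: new API versions only
// have to implement ToConflictView; the validators themselves stay untouched.
func validateNoNodeConflicts(candidate versionedDynaKube, existing []versionedDynaKube, overlaps func(a, b map[string]string) bool) []string {
	warnings := []string{}
	c := candidate.ToConflictView()

	for _, other := range existing {
		o := other.ToConflictView()
		if o.Name == c.Name {
			continue
		}

		if c.NeedsLogModule && o.NeedsOneAgent && overlaps(c.LogModuleNodeSelector, o.OneAgentNodeSelector) {
			warnings = append(warnings, "LogModule in "+c.Name+" conflicts with OneAgent in "+o.Name)
		}
	}

	return warnings
}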
- errorNodeSelectorConflict = `The DynaKube's specification tries to specify a nodeSelector conflicts with an another Dynakube's nodeSelector, which is not supported.
- The conflicting Dynakube: %s
- `
+ errorNodeSelectorConflict = `The DynaKube specification attempts to deploy a %s, which conflicts with another DynaKube's %s. Only one Agent per node is supported.
This does not give the information anymore that the nodeSelector is the problem.
That was kind of on purpose, because the nodeSelector is not really the cause. The error also pops up if no nodeSelectors are specified and two DynaKubes are colliding.
So I would rather state that the DynaKubes are colliding and which type of agent is the problem. So WDYT, should we still change it back? 🤔
The only way to fix it is by specifying a nodeSelector, so that should be mentioned in the message somewhere.
Updated
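For context on why tying the message to the nodeSelector alone can mislead: below is a simplified stand-in for the overlap check (the function names and the subset rule are assumptions for illustration, not the PR's actual hasConflictingMatchLabels). It makes the case from the discussion above explicit: two DynaKubes without any nodeSelector always collide, because an empty selector schedules the component on every node.

// selectorsOverlap reports whether two nodeSelectors can end up on the same node.
func selectorsOverlap(a, b map[string]string) bool {
	// An empty nodeSelector matches every node, so it overlaps with anything,
	// including another empty selector - the "no nodeSelector" collision case.
	if len(a) == 0 || len(b) == 0 {
		return true
	}

	// One plausible rule, consistent with the testing setup above: non-empty
	// selectors overlap when one label set is contained in the other.
	return isSubset(a, b) || isSubset(b, a)
}

func isSubset(sub, super map[string]string) bool {
	for key, value := range sub {
		if superValue, ok := super[key]; !ok || superValue != value {
			return false
		}
	}

	return true
}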
for _, item := range validDynakubes.Items {
	if item.Name == dk.Name {
		// Skip the DynaKube that is currently being validated.
		continue
	}

	if item.NeedsOneAgent() {
		// The new LogModule may not share nodes with another DynaKube's OneAgent.
		if hasConflictingMatchLabels(dk.LogModuleNodeSelector(), item.OneAgentNodeSelector()) {
			log.Info("requested dynakube has conflicting LogModule nodeSelector", "name", dk.Name, "namespace", dk.Namespace)

			return fmt.Sprintf(errorConflictingLogModule, item.Name)
		}
	}
}
here you could also collect all the conflicting DynaKubes 😅
Just make sure to only list the conflicting DynaKubes; if we output the full error for each DynaKube, we could easily hit the warning limit.
We don't need that, as there can only be one conflicting DynaKube.
It could be many, but it's not very realistic.
Example
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
name: cloudnative
namespace: dynatrace
spec:
apiUrl: https://zib50933.dev.dynatracelabs.com/api
tokens: zib50933
enableIstio: true
dynatraceApiRequestThreshold: 15
oneAgent:
cloudNativeFullStack:
namespaceSelector:
matchLabels:
monitored/by: v2
nodeSelector:
dk-v2: deploy
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
---
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
name: cloudnative-old
namespace: dynatrace
spec:
apiUrl: https://zib50933.dev.dynatracelabs.com/api
tokens: zib50933
namespaceSelector:
matchLabels:
monitored/by: v1
oneAgent:
cloudNativeFullStack:
nodeSelector:
dk-v1: deploy
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
---
apiVersion: dynatrace.com/v1beta3
kind: DynaKube
metadata:
name: log-module
namespace: dynatrace
spec:
apiUrl: https://zib50933.dev.dynatracelabs.com/api
tokens: zib50933
dynatraceApiRequestThreshold: 15
logModule:
enabled: true
😮 - That's good, an edge case I didn't think of; I will adapt the code and add it as a unit test. Thx
Done
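A hedged sketch of what the adapted check might look like after the discussion above (the merged code may well differ): collect every conflicting DynaKube and report all of them in a single warning, which also keeps the response small with respect to the warning limit. strings.Join from the standard library is the only addition; all other identifiers are taken from the excerpt above.

conflicting := []string{}

for _, item := range validDynakubes.Items {
	if item.Name == dk.Name {
		continue
	}

	if item.NeedsOneAgent() &&
		hasConflictingMatchLabels(dk.LogModuleNodeSelector(), item.OneAgentNodeSelector()) {
		conflicting = append(conflicting, item.Name)
	}
}

if len(conflicting) > 0 {
	log.Info("requested dynakube has conflicting LogModule nodeSelector", "name", dk.Name, "namespace", dk.Namespace)

	// One message listing every conflicting DynaKube instead of one warning each.
	return fmt.Sprintf(errorConflictingLogModule, strings.Join(conflicting, ", "))
}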
Description
It is not possible to deploy the LogModule on the same node as the OneAgent.
To prevent misconfigurations, we should add a check for this to our validation webhook.
Jira
How can this be tested?