I have the following configuration: a values.yaml in which several of the Collector's ports are set to `enabled: false`. From this, I'd expect these ports to be totally disabled and not configured anywhere.
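To make the shape concrete, here is an illustrative sketch using the chart's default port names; these are not necessarily the exact ports I have disabled:

```yaml
# values.yaml (illustrative sketch, not my exact values)
service:
  enabled: true
ports:
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  zipkin:
    enabled: false
```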
This expectation is backed up by the chart's own templates. In `service.yaml`:
```yaml
{{- $ports := include "opentelemetry-collector.servicePortsConfig" . }}
{{- if $ports }}
  ports:
    {{- $ports | nindent 4 }}
{{- end }}
```
And in `_config.tpl`:
```yaml
{{- define "opentelemetry-collector.servicePortsConfig" -}}
{{- $ports := deepCopy .Values.ports }}
{{- range $key, $port := $ports }}
{{- if $port.enabled }}
- name: {{ $key }}
  port: {{ $port.servicePort }}
  targetPort: {{ $port.containerPort }}
  protocol: {{ $port.protocol }}
  {{- if $port.appProtocol }}
  appProtocol: {{ $port.appProtocol }}
  {{- end }}
  {{- if $port.nodePort }}
  nodePort: {{ $port.nodePort }}
  {{- end }}
{{- end }}
{{- end }}
{{- end }}
```
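Given the `{{- if $port.enabled }}` guard in that helper, a disabled port should be skipped entirely, so the rendered Service should only ever carry enabled entries. A sketch of what I'd expect (hypothetical release name; only the chart's default OTLP gRPC port left enabled):

```yaml
# Expected rendering (sketch, not actual chart output)
apiVersion: v1
kind: Service
metadata:
  name: example-opentelemetry-collector  # hypothetical release name
spec:
  type: ClusterIP
  ports:
    - name: otlp        # the chart's default OTLP gRPC port
      port: 4317
      targetPort: 4317
      protocol: TCP
```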
However, upon running with `service.enabled: true`, I can see that these disabled ports are still marked as exposed on the generated Service resource:
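(The actual kubectl output is redacted. Illustratively, reusing the hypothetical values sketch above, the generated spec still contains entries of this shape:)

```yaml
# Observed shape (illustrative; real service name and IPs redacted)
spec:
  ports:
    - name: otlp
      port: 4317
      targetPort: 4317
      protocol: TCP
    - name: jaeger-compact   # enabled: false in values.yaml, yet still rendered
      port: 6831
      targetPort: 6831
      protocol: UDP
    - name: zipkin           # enabled: false in values.yaml, yet still rendered
      port: 9411
      targetPort: 9411
      protocol: TCP
```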
The config is definitely being pulled in from my values.yaml, since newly added ports show up in this list as expected.
The redacted IP addresses in this case are allocated by my network CNI plugin, so these ports end up exposed on externally-facing IP addresses on the network running my cluster, which is not desired behaviour.