Add flag to enable controller cached client #675

Merged — 5 commits merged into migtools:master on May 26, 2021

Conversation

djwhatle
Contributor

Description

Complement to migtools/mig-controller#1037

Add the switch

mig_controller_enable_cache: true|false
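
For reference, settings like this are normally driven from the MigrationController CR, whose spec fields the operator passes through as Ansible variables. A minimal sketch of setting the switch, assuming the usual CR name and namespace (illustrative, not taken from this PR):

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      mig_controller_enable_cache: true

When the flag is true, the controller-manager template adds ENABLE_CACHED_CLIENT=true to the mtc container's environment, which the complementary mig-controller change (migtools/mig-controller#1037) is expected to consume.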

@djwhatle
Contributor Author

@jmontleon @rayfordj This failing test seems like a fluke to me. Is someone able to interpret why it failed?

@djwhatle changed the title from "Add switch to enable controller cached client" to "Add flag to enable controller cached client" on May 20, 2021
@jmontleon
Collaborator

/test operator-e2e

@jmontleon
Collaborator

/test operator-e2e

@jmontleon
Collaborator

/test operator-e2e

@jmontleon
Collaborator

@alaypatel07 more strange errors...

@alaypatel07
Contributor

TASK [Set up mig controller] ********************************
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: no test named 'true'. String: ---\napiVersion: v1\nkind: Service\n[... full controller-manager Service/Deployment template, including the block ...]\n{% if mig_controller_enable_cache|bool is true %}\n        - name: ENABLE_CACHED_CLIENT\n          value: \"true\"\n{% endif %}\n[...]\n      - name: discovery\n        emptyDir: {}\n"}

Same for this. This is very weird; the template does not seem to have problems.

https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/konveyor_mig-operator/675/pull-ci-konveyor-mig-operator-master-operator-e2e/1395842958757990400/artifacts/operator-e2e/gather-extra/artifacts/pods/openshift-migration_migration-operator-7fc96f478f-h5qhx_operator.log

@@ -71,6 +71,10 @@ spec:
           value: webhook-server-secret
         - name: MIGRATION_REGISTRY_IMAGE
           value: {{ migration_registry_image_fqin }}
+{% if mig_controller_enable_cache|bool is true %}
@shawn-hurley
Contributor
May 24, 2021

@jmontleon @djwhatle

I think this is the thing that is causing the error here:

https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/konveyor_mig-operator/675/pull-ci-konveyor-mig-operator-master-operator-e2e/1395842958757990400/artifacts/operator-e2e/gather-extra/artifacts/pods/openshift-migration_migration-operator-7fc96f478f-h5qhx_operator.log

TASK [Set up mig controller] ********************************
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'template'. Error was a <class 'ansible.errors.AnsibleError'>, original message: template error while templating string: no test named 'true'. String: [...]"}

@jmontleon
Collaborator
May 24, 2021

Yes. "is true" and "is false" are built-in tests that were not added to Jinja2 until 2.11, and the RPM on ubi8 is 2.10.x. You need to use comparison operators like ==, !=, etc. to satisfy downstream.
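
A minimal sketch of the difference, using only the variable already in the template:

    {# Fails on Jinja2 2.10.x with "no test named 'true'": #}
    {% if mig_controller_enable_cache|bool is true %}

    {# Both of these work on 2.10.x: #}
    {% if mig_controller_enable_cache|bool == true %}
    {% if mig_controller_enable_cache|bool %}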

It works upstream because ansible-operator installs its dependencies with pip, which pulls in a more current version of Jinja2 than is available via RPM on ubi8.
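
A quick way to check which situation a given image is in (plain Python, nothing project-specific):

    $ python3 -c "import jinja2; print(jinja2.__version__)"

An RPM-based ubi8 image reports a 2.10.x release here, while a pip-installed environment reports 2.11 or newer, which is why the template only breaks where the RPM version is used.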


@djwhatle
Contributor Author

/retest

@rayfordj
Contributor

/test operator-e2e

@rayfordj
Contributor
left a comment

lgtm

Still not clear why tests continue to give us grief ... 🤷‍♂️

@rayfordj
Contributor

/retest

@jmontleon
Collaborator
left a comment

ACK

@djwhatle
Contributor Author

Resolved conflicts.

@jmontleon merged commit 48aa892 into migtools:master on May 26, 2021
@djwhatle changed the title from "Add flag to enable controller cached client" to "[MIG-699] Add flag to enable controller cached client" on Jun 2, 2021
@djwhatle changed the title from "[MIG-699] Add flag to enable controller cached client" to "Add flag to enable controller cached client" on Jun 2, 2021
@djwhatle linked an issue on Jun 2, 2021 that may be closed by this pull request: [MIG-699] Add flag to enable cached client