
Operatorhub Catalog ARM64 Support #2823

Closed
darktempla opened this issue Jul 28, 2022 · 4 comments · Fixed by #2890
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@darktempla

Bug Report

What did you do?

1. Installed OLM on my Raspberry Pi k3s cluster (arm64).

I did have to change the catalog image from quay.io/operatorhubio/catalog:latest to quay.io/operatorhubio/catalog:lts. With the latest tag the pod produced no logs at all and simply never ran; after switching to the lts tag the gRPC server started up and the catalog looked healthy (see the CatalogSource sketch below).

2. Installed my first operator

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-argocd-operator
  namespace: operators
spec:
  channel: alpha
  name: argocd-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
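
For reference, the catalog image change from step 1 looks roughly like this. It is a minimal sketch based on the default operatorhubio-catalog CatalogSource shipped with the OLM install manifests; only the image tag was changed, and the remaining fields are assumed from the standard manifest.

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: operatorhubio-catalog
  namespace: olm
spec:
  sourceType: grpc
  # switched from quay.io/operatorhubio/catalog:latest, which never started on arm64
  image: quay.io/operatorhubio/catalog:lts
  displayName: Community Operators
  publisher: OperatorHub.io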

What did you expect to see?

That the Operator Framework would perform its magic and install Argo CD on the cluster.

What did you see instead? Under which circumstances?

$ kubectl -n operators describe sub my-argocd-operator 

...
Status:
  Catalog Health:
    Catalog Source Ref:
      API Version:       operators.coreos.com/v1alpha1
      Kind:              CatalogSource
      Name:              operatorhubio-catalog
      Namespace:         olm
      Resource Version:  622853
      UID:               aeef1f77-c29d-415c-a1bc-a726372b8ae9
    Healthy:             true
    Last Updated:        2022-07-28T11:09:02Z
  Conditions:
    Last Transition Time:   2022-07-28T11:09:02Z
    Message:                all available catalogsources are healthy
    Reason:                 AllCatalogSourcesHealthy
    Status:                 False
    Type:                   CatalogSourcesUnhealthy
    Last Transition Time:   2022-07-28T11:10:35Z
    Message:                bundle unpacking failed. Reason: BackoffLimitExceeded, and Message: Job has reached the specified backoff limit
    Reason:                 InstallCheckFailed
    Status:                 True
    Type:                   InstallPlanFailed
  Current CSV:              argocd-operator.v0.2.1
  Install Plan Generation:  1
  Install Plan Ref:
    API Version:       operators.coreos.com/v1alpha1
    Kind:              InstallPlan
    Name:              install-ztjh5
    Namespace:         operators
    Resource Version:  625442
    UID:               bbb2ad75-cc7e-41b7-a59a-b368ecf65ac2
  Installplan:
    API Version:  operators.coreos.com/v1alpha1
    Kind:         InstallPlan
    Name:         install-ztjh5
    Uuid:         bbb2ad75-cc7e-41b7-a59a-b368ecf65ac2
  Last Updated:   2022-07-28T11:10:35Z
  State:          UpgradePending
Events:           <none>
$ kubectl -n olm get jobs 
NAME                                                              COMPLETIONS   DURATION   AGE
a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa   0/1           40m        40m

$ kubectl -n olm get job a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa -o yaml

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2022-07-28T11:09:04Z"
  generation: 1
  labels:
    controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
    job-name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
  name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
  namespace: olm
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: false
    controller: false
    kind: ConfigMap
    name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
    uid: b4865d98-0576-46e9-ae18-3f7f6b9abb5d
  resourceVersion: "625775"
  uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
spec:
  activeDeadlineSeconds: 600
  backoffLimit: 3
  completionMode: NonIndexed
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
  suspend: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: e9330a07-df5d-4a3b-bfd7-0b41475c7957
        job-name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
      name: a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
    spec:
      containers:
      - command:
        - opm
        - alpha
        - bundle
        - extract
        - -m
        - /bundle/
        - -n
        - olm
        - -c
        - a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
        - -z
        env:
        - name: CONTAINER_IMAGE
          value: quay.io/operatorhubio/argocd-operator:v0.2.1
        image: quay.io/operator-framework/upstream-opm-builder:latest
        imagePullPolicy: Always
        name: extract
        resources:
          requests:
            cpu: 10m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /bundle
          name: bundle
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /bin/cp
        - -Rv
        - /bin/cpb
        - /util/cpb
        image: quay.io/operator-framework/olm:v0.21.2
        imagePullPolicy: IfNotPresent
        name: util
        resources:
          requests:
            cpu: 10m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /util
          name: util
      - command:
        - /util/cpb
        - /bundle
        image: quay.io/operatorhubio/argocd-operator:v0.2.1
        imagePullPolicy: Always
        name: pull
        resources:
          requests:
            cpu: 10m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /bundle
          name: bundle
        - mountPath: /util
          name: util
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: bundle
      - emptyDir: {}
        name: util
status:
  conditions:
  - lastProbeTime: "2022-07-28T11:10:33Z"
    lastTransitionTime: "2022-07-28T11:10:33Z"
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 4
  ready: 0
  startTime: "2022-07-28T11:09:04Z"
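
To dig further into why the unpack job keeps failing, the pods it created can be inspected directly. A rough example, reusing the job name from the output above (the job-name label is the standard label the Job controller applies to its pods):

$ kubectl -n olm get pods -l job-name=a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa
$ kubectl -n olm logs job/a9721c26fa1d63e728fa336168752cf75175818883fc3402042eedc3e00b9fa --all-containers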

Environment

  • operator-lifecycle-manager version:
$ grep image base/olm.yaml

          image: quay.io/operator-framework/olm:v0.21.2
          imagePullPolicy: IfNotPresent
          - --util-image
          image: quay.io/operator-framework/olm:v0.21.2
          imagePullPolicy: IfNotPresent
                image: quay.io/operator-framework/olm:v0.21.2
                imagePullPolicy: Always
  image: quay.io/operatorhubio/catalog:lts
  • Kubernetes version information:
$ kubectl version

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:38:26Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2+k0s", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-07-11T06:55:47Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/arm64"}
  • Kubernetes cluster kind:
v1.24.3+k3s1

Additional context
I already looked at the following issue, but it did not provide a fix for my specific problem:
#1138

@darktempla added the kind/bug label on Jul 28, 2022
@dorsegal

I am getting the same thing when trying to install the RabbitMQ operator. The Subscription and Operator are created, but the job fails.
Logs from the job pod: exec /bin/cp: exec format error
It looks like the job is trying to run on one of my ARM nodes.
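
One way to confirm that kind of mismatch is to compare the node architectures with the node the failed unpack pod was scheduled on. The commands below are illustrative, and the job name placeholder has to be filled in from kubectl -n olm get jobs:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
$ kubectl -n olm get pods -o wide -l job-name=<unpack-job-name>   # the NODE column shows where the pod ran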

@StopMotionCuber
Contributor

Having a similar issue with a multi-arch (amd64/arm64) cluster. I could pin it down to the quay.io/operator-framework/upstream-opm-builder:latest image being used, which is documented as deprecated.

After digging down the rabbit hole and finding out that multi-arch builds have actually been implemented upstream (just for another image), it seems like the opm image at https://quay.io/repository/operator-framework/opm is the correct image to use.
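
Before pointing OLM at a different image it is worth checking that the candidate actually publishes an arm64 variant. A quick, illustrative check of the manifest list (skopeo here is an assumption; docker manifest inspect gives similar output):

$ skopeo inspect --raw docker://quay.io/operator-framework/opm:latest | grep -c arm64

A non-zero count means the tag is a manifest list with an arm64 platform entry, whereas an amd64-only image such as the deprecated upstream-opm-builder would explain the exec format error above.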

@awgreene
Member

Should be fixed with the v0.24.0 release!
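
For anyone on an older install, upgrading to a release that contains the fix follows the usual upstream quickstart; the release asset and script arguments below match the published v0.24.0 release and may differ for other versions:

$ curl -L https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.24.0/install.sh -o install.sh
$ chmod +x install.sh
$ ./install.sh v0.24.0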

@agelwarg

agelwarg commented Mar 15, 2023

Should be fixed with the v0.24.0 release!

@awgreene It appears that the extract container in the pod trying to install an operator is still referencing the upstream-opm-builder image instead of opm, as @StopMotionCuber mentions above. However, I don't know what needs to happen for that PR to be accepted and/or whether that means another new release is needed.

I've gotten around this for now by manually editing deployment/catalog-operator to override the default opm image:
(screenshot of the edited catalog-operator deployment)
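
A rough sketch of what that override amounts to is below. The --opmImage and --util-image flag names are taken from the catalog-operator's command-line options in the upstream manifests; the exact argument list varies between releases, so treat this as illustrative rather than a copy of the screenshot above:

$ kubectl -n olm edit deployment catalog-operator
...
      containers:
      - name: catalog-operator
        args:
        - --namespace
        - olm
        - --opmImage
        - quay.io/operator-framework/opm:latest    # instead of the deprecated upstream-opm-builder default
        - --util-image
        - quay.io/operator-framework/olm:v0.21.2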

lucamolteni added a commit to lucamolteni/operator that referenced this issue Apr 2, 2024
v0.21.2 fails on Apple Silicon (arm64) due to operator-framework/operator-lifecycle-manager#2823 (comment)

A minimum of v0.24.0 is required; I managed to install it on my machine with v0.27.0

Signed-off-by: Luca Molteni <volothamp@gmail.com>
jmontleon pushed a commit to konveyor/operator that referenced this issue Jun 5, 2024
v0.21.2 fails on Apple Silicon (arm64) due to operator-framework/operator-lifecycle-manager#2823 (comment)

A minimum of v0.24.0 is required; I managed to install it on my machine with v0.27.0

Signed-off-by: Luca Molteni <volothamp@gmail.com>