
MULTIARCH-4569: aws: support multi-arch nodes #8698

Merged: 3 commits into openshift:master on Jul 19, 2024

Conversation

@r4f4 (Contributor) commented Jul 3, 2024

This PR adds support for multi-arch nodes (amd64 controlPlane and arm64 compute, or vice-versa). To accomplish that we:

  1. Add a new feature gate to enable the feature: MultiArchInstallAWS
  2. Add support for multi-arch RHCOS images in the Image asset.
  3. Make sure compute nodes use their own architecture for RHCOS image and instance type.
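
A minimal sketch of the per-architecture image lookup this enables, assuming a stream-metadata-style map of arch -> region -> AMI (hypothetical names and types; the installer's actual Image asset code differs):

package main

import "fmt"

// Sketch only: mirrors the .architectures.<arch>.images.aws.regions.<region>.image
// path of `openshift-install coreos print-stream-json` (see the output further
// down in this thread).
type streamImages map[string]map[string]string // arch -> region -> AMI

func amiFor(images streamImages, arch, region string) (string, error) {
    regions, ok := images[arch]
    if !ok {
        return "", fmt.Errorf("no AWS images for architecture %q", arch)
    }
    ami, ok := regions[region]
    if !ok {
        return "", fmt.Errorf("no AMI for %q in region %q", arch, region)
    }
    return ami, nil
}

func main() {
    images := streamImages{
        "x86_64":  {"us-east-2": "ami-0dc8f3a200b9a6b1f"},
        "aarch64": {"us-east-2": "ami-0ad1aa72284192ee2"},
    }
    for _, arch := range []string{"x86_64", "aarch64"} {
        ami, err := amiFor(images, arch, "us-east-2")
        if err != nil {
            panic(err)
        }
        fmt.Println(arch, ami)
    }
}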

/hold

Depends on:

@openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) on Jul 3, 2024.
@openshift-ci-robot (Contributor) commented Jul 3, 2024

@r4f4: This pull request references MULTIARCH-4569 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

openshift-ci bot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Jul 3, 2024.
@r4f4 (Contributor, Author) commented Jul 3, 2024

/cc @jeffdyoung

@r4f4 (Contributor, Author) commented Jul 3, 2024

Testing this locally with x86 controlPlane nodes and arm64 compute nodes:

$ ./bin/openshift-install coreos print-stream-json | jq '.architectures.x86_64.images.aws.regions."us-east-2"'
{
  "release": "417.94.202407010929-0",
  "image": "ami-0dc8f3a200b9a6b1f"
}

$ ./bin/openshift-install coreos print-stream-json | jq '.architectures.aarch64.images.aws.regions."us-east-2"'
{
  "release": "417.94.202407010929-0",
  "image": "ami-0ad1aa72284192ee2"
}

[root@d52fa708c8aa c]# oc get machines/rdossant-installer-07-h56x8-worker-us-east-2a-b4kdn -n openshift-machine-api -o json | jq '.spec.providerSpec.value | .ami, .instanceType'
{
  "id": "ami-0ad1aa72284192ee2"
}
"m6g.xlarge"

[root@d52fa708c8aa c]# oc get machines/rdossant-installer-07-h56x8-master-0 -n openshift-machine-api -o json | jq '.spec.providerSpec.value | .ami, .instanceType'
{
  "id": "ami-0dc8f3a200b9a6b1f"
}
"m6i.xlarge"

@r4f4 (Contributor, Author) commented Jul 3, 2024

Update: finally figured out why some platforms were failing in CI but not in my local tests:

level=info msg=<== capi.Provision rhcos image: &{ }
level=error msg=failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed during pre-provisioning: failed to use cached vsphere image: parse "": empty url 

When I made the rhcos asset multi-arch aware, I added unexported fields to it. That meant the JSON serializer was not able to read those values, and the asset was serialized as {}. In my local tests I was running create cluster directly, whereas in CI, for some platforms, the install is done in at least 2 steps: create manifests + create cluster. So in CI the installer was using the loaded value of the asset, which in this case was an empty string, resulting in errors like the one above.

Making the struct fields exported (i.e., uppercase) should fix the problem.
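
A minimal, self-contained illustration of that failure mode (illustrative struct names, not the installer's actual asset types):

package main

import (
    "encoding/json"
    "fmt"
)

// encoding/json silently skips unexported struct fields, so an asset holding
// its data only in unexported fields round-trips through the asset store as "{}".
type brokenImage struct {
    controlPlane string // unexported: ignored by encoding/json
    compute      string
}

type fixedImage struct {
    ControlPlane string // exported: serialized and reloaded correctly
    Compute      string
}

func main() {
    b, _ := json.Marshal(brokenImage{controlPlane: "ami-aaa", compute: "ami-bbb"})
    fmt.Println(string(b)) // {}

    f, _ := json.Marshal(fixedImage{ControlPlane: "ami-aaa", Compute: "ami-bbb"})
    fmt.Println(string(f)) // {"ControlPlane":"ami-aaa","Compute":"ami-bbb"}
}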

@openshift-ci-robot (Contributor) commented Jul 4, 2024

@r4f4: This pull request references MULTIARCH-4569 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.


@openshift-merge-robot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Jul 7, 2024.
@openshift-merge-robot removed the needs-rebase label on Jul 7, 2024.
@r4f4 (Contributor, Author) commented Jul 7, 2024

Update: changed the first commit to point to openshift/api instead of my fork, since openshift/api#1947 merged, and rebased on top of the current master after fixing merge conflicts.

@patrickdillon (Contributor)

/lgtm

openshift-ci bot added the lgtm label (indicates that a PR is ready to be merged) on Jul 8, 2024.
@r4f4 (Contributor, Author) commented Jul 9, 2024

CI jobs are being added as part of openshift/release#54129. I'll update this PR's description with the link.

@openshift-ci-robot (Contributor) commented Jul 9, 2024

@r4f4: This pull request references MULTIARCH-4569 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.


This is the first step to implement multi-arch clusters for AWS: allow
users to specify a different architecture for compute nodes when the
`MultiArchInstallAWS` feature gate is enabled.
openshift-ci bot removed the lgtm label on Jul 10, 2024.
@r4f4 (Contributor, Author) commented Jul 10, 2024

Update: rebased on top of current master since #8713 merged with the o/api bump.

@openshift-ci-robot (Contributor) commented Jul 10, 2024

@r4f4: This pull request references MULTIARCH-4569 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.17.0" version, but no target version was set.


@r4f4 (Contributor, Author) commented Jul 10, 2024

/test ?

openshift-ci bot (Contributor) commented Jul 10, 2024

@r4f4: The following commands are available to trigger required jobs:

  • /test agent-integration-tests
  • /test altinfra-images
  • /test aro-unit
  • /test e2e-agent-compact-ipv4
  • /test e2e-aws-ovn
  • /test e2e-aws-ovn-edge-zones-manifest-validation
  • /test e2e-aws-ovn-upi
  • /test e2e-azure-ovn
  • /test e2e-azure-ovn-upi
  • /test e2e-gcp-ovn
  • /test e2e-gcp-ovn-upi
  • /test e2e-metal-ipi-ovn-ipv6
  • /test e2e-openstack-ovn
  • /test e2e-vsphere-ovn
  • /test e2e-vsphere-ovn-upi
  • /test gofmt
  • /test golint
  • /test govet
  • /test images
  • /test okd-images
  • /test okd-unit
  • /test okd-verify-codegen
  • /test openstack-manifests
  • /test shellcheck
  • /test terraform-images
  • /test terraform-verify-vendor
  • /test tf-lint
  • /test unit
  • /test verify-codegen
  • /test verify-vendor
  • /test yaml-lint

The following commands are available to trigger optional jobs:

  • /test altinfra-e2e-aws-custom-security-groups
  • /test altinfra-e2e-aws-ovn
  • /test altinfra-e2e-aws-ovn-fips
  • /test altinfra-e2e-aws-ovn-imdsv2
  • /test altinfra-e2e-aws-ovn-localzones
  • /test altinfra-e2e-aws-ovn-proxy
  • /test altinfra-e2e-aws-ovn-public-ipv4-pool
  • /test altinfra-e2e-aws-ovn-shared-vpc
  • /test altinfra-e2e-aws-ovn-shared-vpc-local-zones
  • /test altinfra-e2e-aws-ovn-shared-vpc-wavelength-zones
  • /test altinfra-e2e-aws-ovn-single-node
  • /test altinfra-e2e-aws-ovn-wavelengthzones
  • /test altinfra-e2e-azure-capi-ovn
  • /test altinfra-e2e-gcp-capi-ovn
  • /test altinfra-e2e-gcp-ovn-byo-network-capi
  • /test altinfra-e2e-gcp-ovn-secureboot-capi
  • /test altinfra-e2e-gcp-ovn-xpn-capi
  • /test altinfra-e2e-ibmcloud-capi-ovn
  • /test altinfra-e2e-nutanix-capi-ovn
  • /test altinfra-e2e-openstack-capi-ccpmso
  • /test altinfra-e2e-openstack-capi-ccpmso-zone
  • /test altinfra-e2e-openstack-capi-dualstack
  • /test altinfra-e2e-openstack-capi-dualstack-upi
  • /test altinfra-e2e-openstack-capi-dualstack-v6primary
  • /test altinfra-e2e-openstack-capi-externallb
  • /test altinfra-e2e-openstack-capi-nfv-intel
  • /test altinfra-e2e-openstack-capi-ovn
  • /test altinfra-e2e-openstack-capi-proxy
  • /test altinfra-e2e-powervs-capi-ovn
  • /test altinfra-e2e-vsphere-capi-multi-vcenter-ovn
  • /test altinfra-e2e-vsphere-capi-ovn
  • /test altinfra-e2e-vsphere-capi-static-ovn
  • /test altinfra-e2e-vsphere-capi-zones
  • /test azure-ovn-marketplace-images
  • /test e2e-agent-compact-ipv4-appliance-diskimage
  • /test e2e-agent-compact-ipv4-none-platform
  • /test e2e-agent-ha-dualstack
  • /test e2e-agent-sno-ipv4-pxe
  • /test e2e-agent-sno-ipv6
  • /test e2e-aws-overlay-mtu-ovn-1200
  • /test e2e-aws-ovn-edge-zones
  • /test e2e-aws-ovn-fips
  • /test e2e-aws-ovn-heterogeneous
  • /test e2e-aws-ovn-imdsv2
  • /test e2e-aws-ovn-proxy
  • /test e2e-aws-ovn-public-subnets
  • /test e2e-aws-ovn-shared-vpc-custom-security-groups
  • /test e2e-aws-ovn-shared-vpc-edge-zones
  • /test e2e-aws-ovn-single-node
  • /test e2e-aws-ovn-upgrade
  • /test e2e-aws-ovn-workers-rhel8
  • /test e2e-aws-upi-proxy
  • /test e2e-azure-ovn-resourcegroup
  • /test e2e-azure-ovn-shared-vpc
  • /test e2e-azurestack
  • /test e2e-azurestack-upi
  • /test e2e-crc
  • /test e2e-external-aws
  • /test e2e-external-aws-ccm
  • /test e2e-gcp-ovn-byo-vpc
  • /test e2e-gcp-ovn-xpn
  • /test e2e-gcp-secureboot
  • /test e2e-gcp-upgrade
  • /test e2e-gcp-upi-xpn
  • /test e2e-ibmcloud-ovn
  • /test e2e-metal-assisted
  • /test e2e-metal-ipi-ovn
  • /test e2e-metal-ipi-ovn-dualstack
  • /test e2e-metal-ipi-ovn-swapped-hosts
  • /test e2e-metal-ipi-ovn-virtualmedia
  • /test e2e-metal-single-node-live-iso
  • /test e2e-nutanix-ovn
  • /test e2e-openstack-ccpmso
  • /test e2e-openstack-ccpmso-zone
  • /test e2e-openstack-dualstack
  • /test e2e-openstack-dualstack-upi
  • /test e2e-openstack-externallb
  • /test e2e-openstack-nfv-intel
  • /test e2e-openstack-proxy
  • /test e2e-vsphere-ovn-techpreview
  • /test e2e-vsphere-ovn-upi-zones
  • /test e2e-vsphere-ovn-zones
  • /test e2e-vsphere-ovn-zones-techpreview
  • /test e2e-vsphere-static-ovn
  • /test okd-e2e-agent-compact-ipv4
  • /test okd-e2e-agent-ha-dualstack
  • /test okd-e2e-agent-sno-ipv6
  • /test okd-e2e-aws-ovn
  • /test okd-e2e-aws-ovn-upgrade
  • /test okd-e2e-gcp
  • /test okd-e2e-gcp-ovn-upgrade
  • /test okd-e2e-vsphere
  • /test okd-scos-e2e-aws-ovn
  • /test okd-scos-images
  • /test tf-fmt

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-installer-master-altinfra-e2e-aws-ovn
  • pull-ci-openshift-installer-master-altinfra-images
  • pull-ci-openshift-installer-master-aro-unit
  • pull-ci-openshift-installer-master-e2e-aws-ovn
  • pull-ci-openshift-installer-master-e2e-aws-ovn-edge-zones
  • pull-ci-openshift-installer-master-e2e-aws-ovn-edge-zones-manifest-validation
  • pull-ci-openshift-installer-master-e2e-aws-ovn-fips
  • pull-ci-openshift-installer-master-e2e-aws-ovn-heterogeneous
  • pull-ci-openshift-installer-master-e2e-aws-ovn-imdsv2
  • pull-ci-openshift-installer-master-e2e-aws-ovn-shared-vpc-custom-security-groups
  • pull-ci-openshift-installer-master-e2e-aws-ovn-shared-vpc-edge-zones
  • pull-ci-openshift-installer-master-e2e-aws-ovn-single-node
  • pull-ci-openshift-installer-master-e2e-external-aws-ccm
  • pull-ci-openshift-installer-master-e2e-nutanix-ovn
  • pull-ci-openshift-installer-master-e2e-openstack-nfv-intel
  • pull-ci-openshift-installer-master-e2e-openstack-ovn
  • pull-ci-openshift-installer-master-e2e-openstack-proxy
  • pull-ci-openshift-installer-master-e2e-vsphere-ovn
  • pull-ci-openshift-installer-master-e2e-vsphere-ovn-techpreview
  • pull-ci-openshift-installer-master-e2e-vsphere-ovn-zones
  • pull-ci-openshift-installer-master-e2e-vsphere-ovn-zones-techpreview
  • pull-ci-openshift-installer-master-gofmt
  • pull-ci-openshift-installer-master-golint
  • pull-ci-openshift-installer-master-govet
  • pull-ci-openshift-installer-master-images
  • pull-ci-openshift-installer-master-okd-unit
  • pull-ci-openshift-installer-master-okd-verify-codegen
  • pull-ci-openshift-installer-master-openstack-manifests
  • pull-ci-openshift-installer-master-shellcheck
  • pull-ci-openshift-installer-master-tf-fmt
  • pull-ci-openshift-installer-master-tf-lint
  • pull-ci-openshift-installer-master-unit
  • pull-ci-openshift-installer-master-verify-codegen
  • pull-ci-openshift-installer-master-verify-vendor
  • pull-ci-openshift-installer-master-yaml-lint

In response to this:

/test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@r4f4 (Contributor, Author) commented Jul 10, 2024

A CI job was just added in the release repo. Let's try it out here:

/test e2e-aws-ovn-heterogeneous

@@ -52,101 +55,107 @@ func (i *Image) Dependencies() []asset.Asset {
 func (i *Image) Generate(ctx context.Context, p asset.Parents) error {
 	if oi, ok := os.LookupEnv("OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE"); ok && oi != "" {
 		logrus.Warn("Found override for OS Image. Please be warned, this is not advised")
-		*i = Image(oi)
+		*i = *MakeAsset(oi)
Contributor:

so in the override case - the same image would be used for cp/computes?

r4f4 (Contributor, Author):

Yes. If we need to support 2 image overrides, then we'd have to either require OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE to hold both images (e.g., separated by a comma) or introduce a new env var for compute images.
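
If the comma-separated route were ever taken, the parsing could look roughly like this (purely hypothetical; today the variable holds a single image used for all nodes):

package main

import (
    "fmt"
    "strings"
)

// Hypothetical format: OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="ami-for-cp,ami-for-compute".
func parseOverride(v string) (controlPlane, compute string) {
    parts := strings.SplitN(v, ",", 2)
    controlPlane = strings.TrimSpace(parts[0])
    compute = controlPlane // default: same image for both pools
    if len(parts) == 2 && strings.TrimSpace(parts[1]) != "" {
        compute = strings.TrimSpace(parts[1])
    }
    return controlPlane, compute
}

func main() {
    cp, co := parseOverride("ami-0dc8f3a200b9a6b1f,ami-0ad1aa72284192ee2")
    fmt.Println(cp, co)
}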

Contributor:

maybe a change to the installer doc mentioning this is also needed

r4f4 (Contributor, Author):

We don't have docs for that env var as it's not officially supported.

r4f4 (Contributor, Author):

@Prashanth684 can you envision a situation where we'd need to override both controlPlane and compute images with distinct values for multi-arch clusters? I'd rather not touch this env var (at least not in this PR) as it's used by agent and powervs.

@Prashanth684 (Contributor) commented Jul 11, 2024:

I'd rather not touch this env var as it's used by agent and powervs

It makes sense to address separately. Do they use it for installation? If so, we have to make sure that it works, or that we don't allow them to provision a cluster with multi-arch computes on day 0.

r4f4 (Contributor, Author):

agent: used when generating the base ISO: https://github.com/openshift/installer/blob/master/pkg/asset/agent/image/baseiso.go#L149-L151

powervs: it seems it's abusing the envvar: https://github.com/openshift/installer/blob/master/pkg/asset/installconfig/powervs/platform.go#L19-L24

we don't allow them to provision a cluster with multi-arch computes on day 0

The only way to bypass the heterogeneous validation in the installer is by setting the feature gate: d9e5e66

That only happens for AWS with this PR.
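
A rough sketch of what that gate-guarded validation could look like (hypothetical names; the actual check lives in the installer's install-config validation):

package main

import "fmt"

// Sketch only: reject a compute pool whose architecture differs from the
// control plane unless the MultiArchInstallAWS feature gate is enabled.
func validateArchitectures(controlPlaneArch, computeArch string, multiArchAWSEnabled bool) error {
    if computeArch != controlPlaneArch && !multiArchAWSEnabled {
        return fmt.Errorf("compute architecture %q does not match control plane architecture %q: enable the MultiArchInstallAWS feature gate", computeArch, controlPlaneArch)
    }
    return nil
}

func main() {
    fmt.Println(validateArchitectures("amd64", "arm64", false)) // error
    fmt.Println(validateArchitectures("amd64", "arm64", true))  // <nil>
}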

 	}
 	switch config.Platform.Name() {
 	case aws.Name:
-		if len(config.Platform.AWS.AMIID) > 0 {
-			return config.Platform.AWS.AMIID, nil
+		if ami := config.Platform.AWS.AMIID; len(ami) > 0 {
Contributor:

We could just call osimage twice in Generate rather than do this. Also, this is just returning the image matching the control-plane arch, which will not work if the compute arch is different.

r4f4 (Contributor, Author):

TBH I don't even know why we have amiID at the platform level if we can specify it for controlPlane/compute/defaultMachinePlatform. If someone specifies this amiID for nodes with different arches, it's more of a user error than anything.

Contributor:

maybe it's like an optimization so they don't have to enter it twice :)

r4f4 (Contributor, Author):

@patrickdillon Do you know what's the purpose of amiID here?

Contributor:

@patrickdillon Do you know what's the purpose of amiID here?

We probably need to do some archaeology to confirm, but I think this was the initial, non-machinepool implementation for custom AMI and was replaced by machinepools so it should be deprecated.

r4f4 (Contributor, Author):

Thanks. I'll create a card for that.


r4f4 added 2 commits July 11, 2024 20:46
For some platforms, we will need to be able to get different RHCOS images based on the architecture of the nodes. Currently it's assumed that the same image is used for all nodes.

Use different instance types based on the node's architecture.
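
A minimal sketch of the per-architecture instance-type selection the second commit describes (hypothetical helper; the installer's real defaulting logic differs), matching the m6i/m6g values seen in the local test output above:

package main

import "fmt"

// Sketch only: pick an instance family that matches the node architecture.
func defaultInstanceType(arch string) string {
    switch arch {
    case "arm64", "aarch64":
        return "m6g.xlarge" // Graviton (arm64)
    default:
        return "m6i.xlarge" // x86_64
    }
}

func main() {
    fmt.Println("controlPlane (amd64):", defaultInstanceType("amd64"))
    fmt.Println("compute (arm64):", defaultInstanceType("arm64"))
}
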
@r4f4 (Contributor, Author) commented Jul 11, 2024

Update: addressed most of @Prashanth684's comments.

@r4f4 (Contributor, Author) commented Jul 11, 2024

e2e-aws-ovn-heterogeneous: the 2 e2e failures:

  • [sig-builds][Feature:Builds] oc new-app should succeed with a --name of 58 characters [apigroup:build.openshift.io] [Skipped:Disconnected] [Skipped:Proxy] [Suite:openshift/conformance/parallel]
  • [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service [apigroup:build.openshift.io] [Skipped:Disconnected] [Skipped:Proxy] [Suite:openshift/conformance/parallel]

are expected and we are going to skip them until https://issues.redhat.com/browse/MULTIARCH-4552 is complete.

openshift-ci bot (Contributor) commented Jul 11, 2024

@r4f4: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-external-aws-ccm 0847771 link false /test e2e-external-aws-ccm
ci/prow/e2e-aws-ovn-shared-vpc-custom-security-groups 0847771 link false /test e2e-aws-ovn-shared-vpc-custom-security-groups
ci/prow/e2e-openstack-proxy 0847771 link false /test e2e-openstack-proxy
ci/prow/e2e-aws-ovn-heterogeneous 0847771 link false /test e2e-aws-ovn-heterogeneous
ci/prow/e2e-openstack-nfv-intel 0847771 link false /test e2e-openstack-nfv-intel

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@Prashanth684 (Contributor)

/lgtm

openshift-ci bot added the lgtm label on Jul 15, 2024.
@r4f4 (Contributor, Author) commented Jul 16, 2024

/label acknowledge-critical-fixes-only
Behind a feature gate.

openshift-ci bot added the acknowledge-critical-fixes-only label (indicates the issuer of the label is OK with the policy) on Jul 16, 2024.
@r4f4 (Contributor, Author) commented Jul 16, 2024

/label platform/aws

@r4f4 (Contributor, Author) commented Jul 16, 2024

@sadasu your comments should all be addressed now. Can you take another look?

@patrickdillon (Contributor)

/approve

openshift-ci bot (Contributor) commented Jul 19, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: patrickdillon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jul 19, 2024.
openshift-merge-bot merged commit 8b80710 into openshift:master on Jul 19, 2024. 31 of 36 checks passed.
@openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

Distgit: ose-installer-altinfra
This PR has been included in build ose-installer-altinfra-container-v4.17.0-202407192341.p0.g8b80710.assembly.stream.el9.
All builds following this will include this PR.

@openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

Distgit: ose-baremetal-installer
This PR has been included in build ose-baremetal-installer-container-v4.17.0-202407192341.p0.g8b80710.assembly.stream.el9.
All builds following this will include this PR.

@openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

Distgit: ose-installer-terraform-providers
This PR has been included in build ose-installer-terraform-providers-container-v4.17.0-202407192341.p0.g8b80710.assembly.stream.el9.
All builds following this will include this PR.

@openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

Distgit: ose-installer-artifacts
This PR has been included in build ose-installer-artifacts-container-v4.17.0-202407192341.p0.g8b80710.assembly.stream.el9.
All builds following this will include this PR.
