[WIP] Use E2E test framework #69
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: fabriziopandini. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
this is fantastic @fabriziopandini ! i look forward to taking a deeper review in the new year
force-pushed from cf4a67a to 2486dfb
force-pushed from 2486dfb to 4ce67c8
made some progress, still figuring out why the autoscaler test does not work
thanks Fabrizio, i am planning to try out these tests this week
i think the main issue is just getting the autoscaler configured properly for the capi clusters; the manifests and code looked mostly good to me. i couldn't quite get it working locally, hopefully we can sync up soon.
Image: "busybox",
Resources: corev1.ResourceRequirements{
    Requests: map[corev1.ResourceName]resource.Quantity{
        corev1.ResourceMemory: resource.MustParse("2G"), // TODO: consider if to make this configurable
this is probably fine for the default tests, we just want to synchronize the requests for the workload with the capacity for the kubemark nodes. by default they are 4GB ram, 2 core cpu. this deployment should be able to fit a single replica per node.
@elmiko when you have some time PTAL. The autoscaler seems to start, but when I create the test deployment requiring additional capacity it panics...
I'm pretty sure I'm doing something wrong in one of the two YAML files I'm using to install the autoscaler, but I can't catch it.
hey @fabriziopandini , i haven't had a chance to follow up here, hoping to take a look next week.
the configurations look ok to me, i'm not quite sure what is happening here. this line
makes me think that the connection to the management cluster is working fine. it found the crd and is using it. i'm double checking the rbac we use in openshift, there might be some differences.
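For reference, the permissions the autoscaler's clusterapi provider generally needs against the management cluster look roughly like the sketch below; the resource list, names, and the bound ServiceAccount are illustrative assumptions, not manifests from this PR.

```yaml
# Hypothetical RBAC sketch for the cluster-autoscaler's clusterapi provider.
# Names and the ServiceAccount subject are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler-capi
rules:
  - apiGroups: ["cluster.x-k8s.io"]
    resources: ["machinedeployments", "machinesets", "machines", "machinepools"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler-capi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler-capi
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler   # illustrative
    namespace: kube-system
```

Missing `watch` or `patch` verbs on the CAPI scalable resources is a common cause of the autoscaler failing after it has successfully discovered the CRDs.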
@elmiko I found the problem.
oh wow, good find. thanks for the update @fabriziopandini . tag me on the pr you make in capi, happy to help review.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
i am still very curious to see this come to fruition, i have some free hack time coming up, i might try to see if i can continue with the effort. with your blessing @fabriziopandini 🙏 /remove-lifecycle stale
FYI the code we prototyped in this PR is now merged in the CAPI test framework, so the first step will be to pick up the latest CAPI release and then start using tests from CAPI (hopefully the code of this PR will be hugely simplified)
ack, thanks @fabriziopandini , i have a pr up to change the cluster-api dep to version 1.4.3 (#86), is that sufficient or will we need the next release?
Those changes are on main, so we can wait for the first 1.5.0 beta around EOM, pick a commit, or ask the release team to cut an alpha tag
no need to cut an alpha, i don't have a ton of time to hack on kubemark currently, but am trying to clean up what i can and learn a little more.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What this PR does / why we need it:
While there is already something under hack tests, it would be nice to have a setup similar to what we have in CAPI, so we can easily run and debug E2E tests from IDEs.
Also, if we move those tests into CAPI, this will allow any provider to quickly set up an autoscaler test.
Which issue this PR fixes:
fixes #67
Special notes for your reviewer:
Release notes: