
Port the kubevirtCI bash code into the go cli #1185

Open
aerosouund opened this issue May 5, 2024 · 5 comments
@aerosouund
Member

Is your feature request related to a problem? Please describe:

Port the kubevirtCI bash code into reusable and extensible Go code.
This issue exists to be a centralized place to discuss the different topics related to the project, and to serve as an umbrella issue that avoids the creation of issues with similar goals.

Describe the solution you'd like:

The removal of duplicated bash code in the repo and a structure that allows us to transform the scripts into Go one by one, as well as the removal of the wrappers required for the provision and run commands of the gocli.

Additional context:

Related issues:

@aerosouund aerosouund changed the title Port the kubevirtCI bahs code into the go cli Port the kubevirtCI bash code into the go cli May 5, 2024
@aerosouund
Member Author

aerosouund commented May 5, 2024

@xpivarc @alicefr @acardace

Over the past few days I have been thinking about which questions we need to explore to begin with this.
Just to confirm we are on the same page: the individual files in each provisioner (or some of them) will each be transformed into a mini CLI, and this is the first step we agreed to execute and investigate.
Some of the questions that come to mind, along with potential suggestions for each, are:

1- Which files will be transformed to Go during the coding period?

Eventually the goal is to do this for all of them. However, during the coding period I believe we should pick a few as the absolute essentials that will guide how the rest is rewritten later, as it is more important to create the skeleton of how this refactor should look and then move the files one by one after that skeleton has been established. For that I recommend provision.sh and provision-k8s.sh, plus a single optional feature file, for which I nominate istio (we can agree to pick another one).
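To make the "mini CLI" idea concrete, here is a minimal sketch of what a ported provision subcommand could look like, using only the standard library `flag` package. The flag names (`-k8s-version`, `-features`) and the `buildProvisionCmd` helper are hypothetical illustrations, not the actual kubevirtci interface.

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// buildProvisionCmd assembles the argument string a ported "provision"
// step would execute. It exists as a pure function so the translation
// of provision.sh logic stays testable without running anything.
func buildProvisionCmd(k8sVersion string, features []string) string {
	parts := []string{"provision", "--k8s-version", k8sVersion}
	for _, f := range features {
		parts = append(parts, "--feature", f)
	}
	return strings.Join(parts, " ")
}

func main() {
	version := flag.String("k8s-version", "1.30", "Kubernetes version to provision")
	features := flag.String("features", "", "comma-separated optional features (e.g. istio)")
	flag.Parse()

	var fs []string
	if *features != "" {
		fs = strings.Split(*features, ",")
	}
	fmt.Println(buildProvisionCmd(*version, fs))
}
```

The point of the skeleton is the shape: each script becomes a self-contained command with flags replacing environment variables, so later files can follow the same pattern.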

2- The execution architecture

During our call I thought the suggestion of having the gocli signal commands to be run on the node container via a webserver was a very nice idea: the node container would come prebuilt with all the mini CLIs we created (or bash scripts, for the files that will be rewritten later), and these would be run via a request made to the node container from the cli. If we agree this is the idea we will go with, then agreeing on what the API looks like, its type (RPC, REST), and how it is served (TCP, Unix socket) is the next most important step.
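As one possible shape for that discussion, here is a hedged sketch of the REST-over-Unix-socket variant: a tiny HTTP server inside the node container that only dispatches whitelisted scripts, plus a client dialing the socket the way the gocli would. The endpoint path, socket name, and script names are all hypothetical; a real implementation would exec the script and stream its output.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"strings"
)

// scripts is a whitelist mapping request names to on-disk paths
// (hypothetical paths), so arbitrary commands cannot be requested.
var scripts = map[string]string{
	"provision":     "/scripts/provision.sh",
	"provision-k8s": "/scripts/provision-k8s.sh",
}

func resolveScript(name string) (string, bool) {
	p, ok := scripts[name]
	return p, ok
}

func handler(w http.ResponseWriter, r *http.Request) {
	name := strings.TrimPrefix(r.URL.Path, "/scripts/")
	path, ok := resolveScript(name)
	if !ok {
		http.Error(w, "unknown script", http.StatusNotFound)
		return
	}
	// A real server would exec the script here; the demo just echoes
	// what it would run.
	fmt.Fprintf(w, "would run %s", path)
}

func main() {
	sock := filepath.Join(os.TempDir(), "gocli-demo.sock")
	os.Remove(sock)
	ln, err := net.Listen("unix", sock)
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go (&http.Server{Handler: http.HandlerFunc(handler)}).Serve(ln)

	// Client side: the gocli would dial the socket instead of TCP.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", sock)
		},
	}}
	resp, err := client.Get("http://unix/scripts/provision")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```

A Unix socket keeps the server unreachable from outside the container while still giving the cli a structured API; swapping in gRPC over the same socket would mostly change the handler layer.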

3- The build process for the container and the clis

I think the easiest and least complicated way is to have all the CLIs (and the webserver) built as part of the node container build script, with all of them existing in the container by default. But this takes away the ability to edit a single one of them and have it copied to the container during execution via scp. If that flexibility is important to us, then we need something more nuanced that accounts for the differences between the operating systems and CPU architectures the cli is built on versus run on.
In my opinion it is a tradeoff between flexibility and complexity.
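The OS/arch mismatch in the second option can be illustrated with a small sketch: the host cli runs on whatever `runtime.GOOS`/`runtime.GOARCH` it was built for, but anything it copies into the node container must be a linux build for the container's architecture. The `artifactName` naming scheme below is hypothetical.

```go
package main

import (
	"fmt"
	"runtime"
)

// artifactName returns the name of the prebuilt binary the host-side
// cli would scp into the node container (hypothetical naming scheme,
// e.g. "provision-linux-amd64").
func artifactName(tool, goos, goarch string) string {
	return fmt.Sprintf("%s-%s-%s", tool, goos, goarch)
}

func main() {
	// The host may be darwin/arm64, windows/amd64, etc., but the node
	// container always runs linux, so a linux build must be selected
	// regardless of the host's own runtime.GOOS.
	fmt.Println("host:", runtime.GOOS+"/"+runtime.GOARCH)
	fmt.Println("ship:", artifactName("provision", "linux", runtime.GOARCH))
}
```

Baking everything into the container image at build time sidesteps this matrix entirely, which is what makes the first option simpler.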

What I recommend as a few starting steps:

  • Transforming provision.sh into a Go CLI, because it is a step with no big open questions associated with it.
  • If we agree on the webserver idea, then the next step is to discuss what this server should look like and implement it.

@aerosouund
Member Author

/assign

@kubevirt-bot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 3, 2024
@kubevirt-bot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 2, 2024
@aerosouund
Member Author

/remove-lifecycle rotten

@kubevirt-bot kubevirt-bot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 2, 2024