
backend/cmd/headlamp: Fix issue in cluster looking for kube config #2323

Open
wants to merge 2 commits into base: main
Conversation

illume
Collaborator

@illume illume commented Sep 11, 2024

Fixes #1826

I'm not sure what's best. The first commit disables it.

There is a use case of using kube config in cluster here: #1826 (comment)

But are the other statements needed in-cluster too?

I guess if we do support kubeconfig loading in-cluster, then they should be warnings rather than error logs?

Fixes #1826

Signed-off-by: René Dudfield <renedudfield@microsoft.com>
@illume illume added bug Something isn't working backend Issues related to the backend labels Sep 11, 2024
@illume illume marked this pull request as draft September 11, 2024 13:11

Backend Code coverage changed from 60.2% to 60.3%. Change: 0.1% 😃.

Signed-off-by: René Dudfield <renedudfield@microsoft.com>
@illume illume marked this pull request as ready for review September 11, 2024 13:35

Backend Code coverage changed from 60.1% to 60.3%. Change: 0.2% 😃.

Collaborator

@joaquimrocha joaquimrocha left a comment


I left a comment on the 1st patch about something I then realized you had already implemented in the 2nd patch.
I think the 2nd patch is the right approach. Let's keep the possibility of mixing loading clusters from different places, but I don't think we need the new TODO comments.

    if err != nil {
        logger.Log(logger.LevelError, nil, err, "loading kubeconfig")
    }
    if !config.useInCluster {

Many users have asked how they could add more clusters when Headlamp is running in a cluster. I find the mixed use case a bit of a corner case, but I don't see a reason why we should restrict mixing kubeconfig loading with in-cluster mode. So maybe what we need to do is not log errors about loading kubeconfigs if we are running in-cluster.

    err := kubeconfig.LoadAndStoreKubeConfigs(config.kubeConfigStore, kubeConfigPath, kubeconfig.KubeConfig)
    if err != nil {
        if config.useInCluster {
            logger.Log(logger.LevelInfo, nil, err, "loading kubeconfig")

Maybe we could put the log level into a variable and then have only one logger.Log call? I think the code would be more easily understood.
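That suggestion could look like the following sketch. The helper name and the log-level stand-ins are hypothetical, not the real Headlamp backend API; the point is that the level is chosen once, so only a single logging call is needed afterwards:

```go
package main

import "fmt"

// LogLevel and the constants below are hypothetical stand-ins for
// Headlamp's logger package levels.
type LogLevel string

const (
	LevelInfo  LogLevel = "info"
	LevelError LogLevel = "error"
)

// levelForKubeConfigError picks the log level once, so the caller
// needs only one logging call instead of two branches.
func levelForKubeConfigError(useInCluster bool) LogLevel {
	if useInCluster {
		// Running in-cluster: a kubeconfig problem is expected
		// and should not be reported as an error.
		return LevelInfo
	}
	return LevelError
}

func main() {
	err := fmt.Errorf("no kubeconfig found")
	level := levelForKubeConfigError(true)
	// Single call with the chosen level.
	fmt.Printf("%s: %v: loading kubeconfig\n", level, err)
}
```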

@illume
Collaborator Author

illume commented Sep 16, 2024

There's been a bit more discussion since then in this issue:

There is a use case of using kube config in cluster here: #1826 (comment)

@joaquimrocha
Collaborator

> There's been a bit more discussion since then in this issue:
>
> There is a use case of using kube config in cluster here: #1826 (comment)

Either way, I think we need to keep supporting this. By default we don't look for a kubeconfig if -in-cluster is passed, so I don't see an inconvenience in keeping support for this "misfeature".
In terms of making it work, just not aborting when -in-cluster is also used should be sufficient.
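A minimal sketch of that behavior, with hypothetical names standing in for the real backend functions (loadKubeConfigs stands in for kubeconfig.LoadAndStoreKubeConfigs): the load error is only fatal when -in-cluster is not set.

```go
package main

import (
	"errors"
	"fmt"
)

// loadKubeConfigs is a hypothetical stand-in for
// kubeconfig.LoadAndStoreKubeConfigs in the Headlamp backend.
func loadKubeConfigs(path string) error {
	if path == "" {
		return errors.New("no kubeconfig found")
	}
	return nil
}

// setupClusters treats a kubeconfig load failure as fatal only when
// not running in-cluster; in-cluster, the kubeconfig is an optional
// extra source of clusters, so the failure is merely reported.
func setupClusters(kubeConfigPath string, useInCluster bool) error {
	if err := loadKubeConfigs(kubeConfigPath); err != nil {
		if !useInCluster {
			return err // the kubeconfig was our only source of clusters
		}
		fmt.Println("info: loading kubeconfig:", err)
	}
	return nil
}

func main() {
	fmt.Println(setupClusters("", true))  // tolerated in-cluster
	fmt.Println(setupClusters("", false)) // fatal otherwise
}
```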

Labels: backend (Issues related to the backend), bug (Something isn't working)
Project status: In Progress
Successfully merging this pull request may close these issues.

In-cluster deployment looks for a kubeconfig
2 participants