
Removing legacy multitenancy #82020

Closed · 11 of 14 tasks
kobelb opened this issue Oct 29, 2020 · 41 comments

@kobelb
Contributor

kobelb commented Oct 29, 2020

Summary

Users have historically been able to change the kibana.index setting in their kibana.yml to implement what will henceforth be referred to as "legacy multitenancy". This allowed users to run multiple instances of Kibana against the same Elasticsearch cluster, with isolation between the data stored in each kibana.index. This approach to multitenancy has been fraught with problems and has introduced considerable complexity to Kibana. With the implementation of Spaces, we no longer need to rely on the legacy method of multitenancy, as we now have a first-class method of implementing it.

As such, starting in 8.0, we will be removing the ability to configure the following settings that were used to implement legacy multitenancy:

  • kibana.index
  • xpack.reporting.index
  • xpack.task_manager.index

During 7.x, these settings will be deprecated, and users will be warned that they won't be able to configure them any longer starting in 8.0. Users will be encouraged to migrate to Spaces or to use CCR/CCS with separate Elasticsearch clusters. As part of this effort, we will ensure that users have a clear path to migrate from a legacy multitenant instance to Spaces.
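For reference, a legacy multitenant kibana.yml typically looked something like the following; the tenant-specific index names here are purely illustrative:

```yaml
# kibana.yml for a hypothetical "team-a" tenant. All three settings are
# deprecated in 7.x and will be rejected at startup in 8.0.
kibana.index: ".kibana-team-a"                          # saved objects for this tenant only
xpack.reporting.index: ".reporting-team-a"              # reports for this tenant only
xpack.task_manager.index: ".kibana_task_manager-team-a" # task manager state for this tenant only
```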

Alternatives to legacy multitenancy

Spaces

Spaces allow users to segment their saved-objects and to grant different groups of users access to different Spaces. One of the common uses of Spaces is to implement multitenancy, where multiple groups of users are able to share an instance of Kibana with isolation between the groups. When Spaces was first implemented, users were encouraged to use saved-object import/export to move their saved-objects from a tenant to Spaces. A number of users have successfully completed the migration from legacy multitenancy to Spaces, and we've seen the adoption of legacy multitenancy decline since the implementation of Spaces.

Migrating to Spaces

Using saved-object management, a user is able to export all of the saved-objects from a legacy multitenant instance to a Space in the default tenant. However, there are currently some Kibana entities that can't be exported and imported using saved-object management. If we're no longer going to allow users to utilize legacy multitenancy, we should provide them a method of transitioning to Spaces, and as such, we'll need to ensure that users have a clear migration path.
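As a rough sketch of that path, the existing saved-objects APIs can already move exportable types from a legacy tenant into a Space on the default tenant; the hostnames, Space ID, and type list below are hypothetical:

```sh
# Export the exportable saved-object types from the legacy tenant's Kibana
curl -X POST "http://tenant-a-kibana:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": ["index-pattern", "visualization", "dashboard"], "includeReferencesDeep": true}' \
  -o team-a.ndjson

# Import them into the "team-a" Space on the default tenant's Kibana
curl -X POST "http://default-kibana:5601/s/team-a/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@team-a.ndjson
```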

Common issues with saved-object import/export integration

  1. Saved-objects aren't importable/exportable, but there are no blockers to changing this.
  2. Saved-object references aren't being used.
  3. Saved-object references are used to model a "composition" relationship, where the referenced entity should be included automatically in exports and not treated as a separate entity (#82064).
  4. When encrypted saved-object attributes are used, the attribute values generally are not included in exports, which can cause issues on import (#82086).
  5. When a custom client is created and the saved-objects are "hidden", they can't be accessed using the standard saved-object APIs, and thus import/export does not work (#82027).
  6. Kibana-specific data is being stored outside of saved-objects.

CCR/CCS

If our users need true isolation of Kibana instances and their system indices but want to use a shared data-set, they should use either cross-cluster replication (CCR) or cross-cluster search (CCS). This solution should primarily be considered when the isolation that Spaces provides is determined to be insufficient, as Spaces are much easier to configure and a less resource-intensive solution.
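For illustration, wiring up CCS amounts to registering the shared data cluster as a remote on each isolated cluster; the remote alias and host below are hypothetical:

```sh
# On the isolated cluster backing an individual Kibana instance,
# register the shared data cluster as a remote for cross-cluster search
curl -X PUT "http://localhost:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{"persistent": {"cluster": {"remote": {"shared_data": {"seeds": ["shared-es.example.com:9300"]}}}}}'

# Queries (and Kibana index patterns) can then target the remote data
curl "http://localhost:9200/shared_data:logs-*/_search?size=1"
```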

Tasks

Original discussion: #60053

kobelb changed the title from "[Draft] Removing legacy multitenancy" to "Removing legacy multitenancy" on Oct 30, 2020
@mshustov
Contributor

mshustov commented Nov 3, 2020

@kobelb should we add #82086 to the Task list?

@kobelb
Contributor Author

kobelb commented Nov 3, 2020

@restrry yup, good call. Will do so now. I'll be creating GitHub issues for the other known situations where import/export doesn't work and linking them to the problems that need to be solved before it can be used.

@sorenlouv
Member

@kobelb I don't see an alternative we can migrate to when kibana.index is removed that will solve the use case for Observability laid out in #60053 (comment).

tl;dr: Developers run local Kibana instances and connect to a shared Elasticsearch cluster. Elastic Cloud doesn't support CCS/CCR with clusters outside Elastic Cloud (e.g. local clusters), and Spaces don't seem applicable in this context (correct me if I'm wrong).

@kobelb
Contributor Author

kobelb commented Nov 19, 2020

@sqren The "kbn es support for CCS/CCR" task above, with no details, was meant to address y'all's use-case. I'll flesh this out in more detail here shortly.

@kuisathaverat
Contributor

@sqren The "kbn es support for CCS/CCR" task above, with no details, was meant to address y'all's use-case. I'll flesh this out in more detail here shortly.

I want to point out here that our test clusters hold a large amount of data (3-4 TB), and up to 80 developers can work at the same time in the same cluster (on different local Kibana instances pointing to a common remote Elasticsearch).

@pgayvallet
Contributor

FYI, some of the issues listed above have been addressed recently.

Also, "SavedObjects should support importing/exporting even if they're hidden from the API" (#82027) is currently in progress. Once this one is done, type owners should theoretically be able to prepare their types for migration.

@Bamieh
Member

Bamieh commented Feb 14, 2021

"SavedObjects should support importing/exporting even if they're hidden from the API" has been merged (#90178).

@pgayvallet
Contributor

@kobelb Do we need to somehow actively notify the impacted type owners that they can now start working on 'enabling' import/export for their types?

@Mpdreamz
Member

We've internally chased some of the options to at least make it easier to have a local Elasticsearch instance take ownership of CCS, but that sadly looks to be too involved, too hard to set up, and too hard to keep up to date with cycling server CA certificates to be a viable alternative.

I am very conflicted about this one:

  • I fully empathize and agree that locking down the kibana index is the way to go
  • @kuisathaverat is working on fantastic tooling to unblock us.

At the same time:

  • Not every team (internal/external) has an infra team like ours to get to a workable alternative
  • Needing a local ES instance, automating CAs with Cloud, and potentially making sure local machines can satisfy SNI validation is a lot to automate too.

It would be great, for example, if I could connect my Kibana to an ES node and specify the single Space I want to connect to, so that all of my Kibana entities are segmented for me without conflicting with others. Something along those lines would let us double down on the existing Space segmentation features even while connecting from different Kibana instances.

This does sound like a best-of-both-worlds approach: a single kibana index, but doubling down on Spaces to provide segmentation, e.g. by defining a namespace inside the kibana index. Has this already been explored as an alternative?

@kobelb
Contributor Author

kobelb commented Jul 14, 2021

This does sound like a best-of-both-worlds approach: a single kibana index, but doubling down on Spaces to provide segmentation, e.g. by defining a namespace inside the kibana index. Has this already been explored as an alternative?

It's not feasible to use Spaces for developer segmentation. There are a number of subsystems within Kibana that aren't segmented by Space, and it would lead to conflicts for developers. For example, if developer A were to add a new saved-object type that developer B doesn't have, developer B's Kibana would fail to start up.

All default tenants of Kibana that share an Elasticsearch cluster must be the same version and have the same plugins installed; otherwise, things just don't work properly.

@pemontto

pemontto commented Aug 4, 2021

What's the expected behaviour if v8 spins up with kibana.index defined? Will it check whether the .kibana index/alias exists, reindex that data there if it doesn't, and otherwise bail out?

@kobelb
Contributor Author

kobelb commented Aug 9, 2021

What's the expected behaviour if v8 spins up with kibana.index defined? Will it check whether the .kibana index/alias exists, reindex that data there if it doesn't, and otherwise bail out?

Starting in 8.0, users will be unable to specify the kibana.index setting and Kibana will crash on startup if it sees this configuration value.

@cachedout
Contributor

cachedout commented Aug 12, 2021

Hi @kobelb (and the rest of the thread) 👋

It appears that #108111 went in, which I suspect was related to the work outlined in #101964. As I mention in the PR, this has broken the Observability Test Clusters, which in turn has disrupted the development workflow for a number of folks on the Observability team.

As has been discussed before, we did know this change was coming but had hoped it would land more toward the October time-frame instead of early August. Sadly, our planned migration work has run into some roadblocks, which we are attempting to work around, but that work has not yet been completed.

As mentioned in the PR, I'd like to ask the Kibana folks for a temporary revert of #108111 until we can unblock our migration work. I suspect this can be done by mid-to-late September, if not well before, but we're just not in a position to deploy our desired workaround as of today.

The mid-September timeline was recently communicated by our team in an email exchange between @kuisathaverat and @alexh97 on the Kibana team, where we responded to the inquiry (Subject: Removal of legacy multitenancy - Observability blockers).

We're going to immediately search for additional workarounds on our end in the event that Kibana isn't able to revert this PR temporarily, but I just wanted to raise the issue in this thread as well for added visibility.

Thanks in advance, and apologies for any lack of communication on our end that may have led to this. :)

cc: @weltenwort

@kobelb
Contributor Author

kobelb commented Aug 12, 2021

Sorry about that, @cachedout. #108111 has been reverted, so you all can continue to use those settings for the time being.

@chrisronline
Contributor

chrisronline commented Sep 20, 2021

@cachedout Is there a ticket we can use to follow and know when we can remerge this PR?

Nevermind, found it! https://github.com/elastic/observability-test-environments/issues/915

@dbuijs

dbuijs commented Sep 28, 2021

The reason we need to stand up multiple instances of Kibana attached to the same Elasticsearch cluster is to support different localizations. With this change, that strategy will no longer work.

Will Spaces be able to have different locale settings?

@kobelb
Contributor Author

kobelb commented Sep 28, 2021

Hey @dbuijs, you can have multiple Kibana nodes/processes with different i18n.locale settings using the default kibana.index, xpack.reporting.index and xpack.task_manager.index settings. Is there a reason why you changed the *.index settings in addition to the i18n.locale settings for the Kibana nodes/processes?
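For illustration, two node configs along these lines can coexist against the same cluster; the locales shown are just examples:

```yaml
# node-1 kibana.yml -- English UI, default system indices
i18n.locale: "en"

# node-2 kibana.yml -- Japanese UI, sharing the same default .kibana index
i18n.locale: "ja-JP"
```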

@dbuijs

dbuijs commented Sep 28, 2021

We were concerned that different Kibana nodes sharing the same kibana.index would overwrite each other, because this happened with earlier versions of Elasticsearch. Note that we need to make changes to index patterns and runtime fields in the different locales. Will different Kibana nodes sharing a kibana.index be able to maintain separate index patterns and display settings for the same Elasticsearch indexes?

@kobelb
Contributor Author

kobelb commented Sep 28, 2021

Will different Kibana nodes sharing a kibana.index be able to maintain separate index patterns and display settings for the same Elasticsearch indexes?

They will not. Our recommendation would be to use Spaces to segment your index patterns. Using Kibana's RBAC model, you can grant a subset of your users access to different Spaces.
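As a sketch, a Space-scoped role can be created with Kibana's roles API; the role name, index pattern, and Space ID below are hypothetical:

```sh
# Grant the "french-analysts" role Kibana access only within the "fr" Space
curl -X PUT "http://localhost:5601/api/security/role/french-analysts" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{
    "elasticsearch": { "indices": [{ "names": ["logs-*"], "privileges": ["read"] }] },
    "kibana": [{ "base": ["all"], "spaces": ["fr"] }]
  }'
```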

@dbuijs

dbuijs commented Sep 29, 2021

I can't do that unless I can have different locale settings for different spaces on the same Kibana instance. Has this been considered by the Kibana team? Would it be helpful for me to create a new issue for this?

@kobelb
Contributor Author

kobelb commented Sep 29, 2021

@dbuijs that's being tracked in #57629

@tsullivan
Member

Based on this quote, there is an issue to raise for Reporting:

Using saved-object management, a user is able to export all of the saved-objects from a legacy multitenant instance to a Space in the default tenant. However, there are currently some Kibana entities that can't be exported and imported using saved-object management. If we're no longer going to allow users to utilize legacy multitenancy, we should provide them a method of transitioning to Spaces, and as such, we'll need to ensure that users have a clear migration path.

Reports are Kibana entities that can't be imported and exported using saved-object management. There is no path to transitioning reports to use Spaces, either.

It should be understood that the only way a user will be able to view historical reports stored in a custom index in 7.x is to download them to another form of storage before upgrading to 8.0.

@kobelb is this acceptable?

cc @elastic/kibana-reporting-services

@kobelb
Contributor Author

kobelb commented Oct 5, 2021

@kobelb is this acceptable?

IMO, yes, because reports can easily be regenerated if they are needed. However, I defer my real opinion to @alexfrancoeur.

@tsullivan
Member

I found a related issue on the text of the deprecation message: #114217

@alexfrancoeur

I'd like to hear @sixstringcode's thoughts as well, but this sounds acceptable to me. One thought, outside of downloading them, would be to re-index to a "historical reports" index. Should we / could we make this an optional task that an administrator is asked about during the upgrade and / or part of the upgrade assistant?

@kobelb
Contributor Author

kobelb commented Oct 11, 2021

One thought, outside of downloading them, would be to re-index to a "historical reports" index. Should we / could we make this an optional task that an administrator is asked about during the upgrade and / or part of the upgrade assistant?

We could allow them to reindex their reports into a historical-reports index; however, it's going to be rather difficult for users to consume this index, as the CSV/PDF/PNG payloads are base64-encoded binary, and reports are currently per-user specific.

@tsullivan - Couldn't we just allow users to reindex their existing custom reporting indices ${xpack.reporting.index}-* into the default .reporting-* indices? IIRC, the reporting documents are tied to people by their username, so they should continue to work as long as the usernames match.

@pgayvallet
Contributor

Couldn't we just allow users to reindex their existing custom reporting indices ${xpack.reporting.index}-* into the default .reporting-* indices?

That would be a manual step performed by an administrator, right?

@kobelb
Contributor Author

kobelb commented Oct 12, 2021

That would be a manual step performed by an administrator, right?

Correct.
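As a sketch, that manual step could be a single _reindex call; the custom index name below is hypothetical:

```sh
# Reindex a tenant's custom reporting index into the default pattern,
# which the default Kibana tenant reads from
curl -X POST "http://localhost:9200/_reindex" \
  -H "Content-Type: application/json" \
  -d '{
    "source": { "index": ".reporting-team-a-2021.10.10" },
    "dest":   { "index": ".reporting-2021.10.10" }
  }'
```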
