This repository has been archived by the owner on Aug 23, 2023. It is now read-only.

Don't let pruning hold the index too long #1065

Merged
merged 8 commits into master from idxPrunePerf on Sep 25, 2018

Conversation

@woodsaj (Member) commented Sep 25, 2018

No description provided.

@woodsaj force-pushed the idxPrunePerf branch 2 times, most recently from d215b83 to 7a76648 on September 25, 2018 08:43
adds a new config setting "max-prune-lock-time": the amount of time per
second that the prune job may hold the index lock. The default is 100ms,
meaning the index can be locked for at most 10% of each second.
@@ -59,6 +61,7 @@ func ConfigSetup() {
memoryIdx.BoolVar(&TagSupport, "tag-support", false, "enables/disables querying based on tags")
memoryIdx.IntVar(&TagQueryWorkers, "tag-query-workers", 50, "number of workers to spin up to evaluate tag queries")
memoryIdx.IntVar(&matchCacheSize, "match-cache-size", 1000, "size of regular expression cache in tag query evaluation")
memoryIdx.DurationVar(&maxPruneLockTime, "max-prune-lock-time", time.Millisecond*100, "Maximum duration each second a prune job can lock the index.")
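
Operationally, this would be tuned through the memory-idx settings; assuming the ini-style config file metrictank reads (the section name here mirrors the flag set above, and is an assumption), that looks like:

  [memory-idx]
  # maximum duration each second the prune job may hold the index lock
  # (default 100ms, i.e. at most 10% of each second)
  max-prune-lock-time = 100ms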
Contributor

we have no proper way to validate duration flags (see #944).
thus my recommendation is (see the sketch below):

  • use a StringVar
  • create a ConfigProcess() function where we can explicitly parse and fail if needed
  • call ConfigProcess() from the "Validate remaining settings" section in metrictank.go
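
A minimal sketch of that suggestion in Go (the variable names and fatal-logging are illustrative, not necessarily what was merged; memoryIdx is the flag set from the diff above):

  var maxPruneLockTimeStr string     // raw flag value, registered as a string
  var maxPruneLockTime time.Duration // parsed result, set by ConfigProcess()

  func ConfigSetup() {
  	// register as a StringVar, since duration flags can't be validated directly (see #944)
  	memoryIdx.StringVar(&maxPruneLockTimeStr, "max-prune-lock-time", "100ms",
  		"Maximum duration each second a prune job can lock the index.")
  }

  // ConfigProcess explicitly parses and fails if needed; metrictank.go
  // would call it from its "Validate remaining settings" section, so a
  // bad value aborts before startup begins.
  func ConfigProcess() {
  	var err error
  	maxPruneLockTime, err = time.ParseDuration(maxPruneLockTimeStr)
  	if err != nil {
  		log.Fatalf("invalid max-prune-lock-time %q: %s", maxPruneLockTimeStr, err)
  	}
  }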

Member Author

I really don't like this model of config management. MemoryIdx already has an Init() error function; can we not just validate the config settings there?

Contributor

we don't initialize the index until late in startup (i.e. after initializing the store, inputs, etc),
whereas configuration should be validated before startup begins.
I agree our config system is not pretty, and we will refactor it, but for now this is the best way to do it.

// provided context is done. When the amount of time spent on the task (the time is determined
// by calls to "Add()") every "window" duration is more than "limit", then calls to
// Wait() will block until the start of the next window period.
func NewTimeLimiter(ctx context.Context, window, limit time.Duration) *TimeLimiter {
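
To make the documented semantics concrete, a caller such as the prune job would presumably wrap its lock-holding work like this (a hypothetical sketch, with the method shapes inferred from the doc comment above):

  limiter := NewTimeLimiter(ctx, time.Second, maxPruneLockTime)

  for _, batch := range batches {
  	limiter.Wait() // blocks if this window's budget is already spent
  	pre := time.Now()
  	idx.Lock()
  	prune(batch) // the work that actually holds the index lock
  	idx.Unlock()
  	limiter.Add(time.Since(pre)) // charge the lock-hold time to the limiter
  }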
Contributor

I think a more common approach for the API design here would be to have a Stop() method instead of using a context merely for signaling shutdown.
That said, if we implemented Stop(), we'd have to make it use a channel anyway to communicate with run(), so it seems like a minor difference.
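
For comparison, the Stop-based design described here would look roughly like this (a hypothetical sketch; as noted, it still needs a channel to signal run()):

  type TimeLimiter struct {
  	window time.Duration
  	stop   chan struct{}
  	// ... accounting fields ...
  }

  func (l *TimeLimiter) Stop() {
  	close(l.stop) // tell run() to exit
  }

  func (l *TimeLimiter) run() {
  	ticker := time.NewTicker(l.window)
  	defer ticker.Stop()
  	for {
  		select {
  		case <-ticker.C:
  			// new window: reset the spent-time accounting
  		case <-l.stop:
  			return
  		}
  	}
  }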

@Dieterbe (Contributor)

For the record, I've run the benches with CPU profiling enabled to see whether a significant portion is taken by chansend, chanreceive, chan select, or any of the time operations, but it looks like this is not the case.
TimeLimiter.Add takes about 1% cumulatively, so I don't really see significant overhead from TimeLimiter (which is great).
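
For reference, such a profile is typically captured with the standard Go tooling (the package path and bench pattern here are illustrative):

  go test -bench . -cpuprofile cpu.out ./idx/memory
  go tool pprof -top memory.test cpu.out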

@woodsaj (Member Author) left a comment

lgtm

@Dieterbe merged commit ff14f10 into master on Sep 25, 2018
@Dieterbe deleted the idxPrunePerf branch on October 29, 2018 09:06