feat(escalating-issues): Detect when an issue starts escalating #47843
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master   #47843      +/-   ##
==========================================
+ Coverage   77.50%   80.81%    +3.31%
==========================================
  Files        4759     4763        +4
  Lines      201207   201371      +164
  Branches    11594    11594
==========================================
+ Hits       155948   162745     +6797
+ Misses      45003    38370     -6633
  Partials      256      256
cache.set(forecast_cache_key, escalating_forecast, forecast_cache_duration)

# Check if the current event occurrence count is greater than the forecast for today's date
group_daily_count = get_group_daily_count(group.project.id, group.id)
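The comparison the comment above describes can be sketched as follows. This is a hedged illustration, not the PR's actual implementation: the forecast is assumed to be a list of one daily threshold per day starting at a known date, and the function name and signature are hypothetical.

```python
from datetime import date

def is_escalating(group_daily_count: int, forecast: list, forecast_start: date) -> bool:
    """Return True when today's event count exceeds today's forecasted threshold.

    Assumes `forecast` holds one threshold per day, beginning at `forecast_start`.
    """
    offset = (date.today() - forecast_start).days
    if offset < 0 or offset >= len(forecast):
        # No forecast entry for today; conservatively treat as not escalating.
        return False
    return group_daily_count > forecast[offset]
```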
Let's keep an eye on this and make sure it doesn't add too much load to Snuba. If it does, one option we have is to only check is_escalating for an issue at most once per minute.
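The once-per-minute throttle suggested above could be done with a short-lived cache key per group. A minimal sketch, assuming a cache backend with TTL support; the key format, TTL constant, and in-memory cache stand-in are illustrative, not Sentry's actual code.

```python
import time

class InMemoryCache:
    """Tiny stand-in for a cache backend that supports per-key TTLs."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires_at = self._store.get(key, (None, 0))
        if expires_at and time.monotonic() < expires_at:
            return value
        return None

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

cache = InMemoryCache()
ESCALATING_CHECK_TTL = 60  # at most one Snuba query per group per minute

def should_check_escalating(group_id: int) -> bool:
    """Return True only if this group has not been checked within the TTL window."""
    key = f"escalating-check:{group_id}"
    if cache.get(key):
        return False  # checked recently; skip the Snuba query
    cache.set(key, True, ESCALATING_CHECK_TTL)
    return True
```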
(This might be a lot more important when we start auto-archiving issues as "ignore until escalating".)
        return
    job["has_reappeared"] = False
    return

with metrics.timer("post_process.process_snoozes.duration"):
It would be great if we could refactor this block into a new function in a different PR.
forecast_cache_duration = (
    (escalating_forecast[0] + timedelta(days=ONE_WEEK_DURATION)).date() - date_now
).total_seconds()
cache.set(forecast_cache_key, escalating_forecast, forecast_cache_duration)
I was thinking that we may want to do this as part of the save call in here:
sentry/src/sentry/issues/escalating_group_forecast.py
Lines 39 to 44 in 0aaef53

def save(self) -> None:
    nodestore.set(
        self.build_storage_identifier(self.project_id, self.group_id),
        self.to_dict(),
        ttl=timedelta(TWO_WEEKS_IN_DAYS_TTL),
    )
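Folding the cache write into save(), as suggested above, would keep the nodestore write and the cache entry from drifting apart. A sketch of that write-through shape; the DictStore stand-ins, key format, and class body here are simplified assumptions, not the real Sentry nodestore or cache modules.

```python
from datetime import timedelta

TWO_WEEKS_IN_DAYS_TTL = 14

class DictStore:
    """Stand-in for both nodestore and the cache backend."""

    def __init__(self):
        self.data = {}

    def set(self, key, value, ttl=None):
        self.data[key] = value

nodestore = DictStore()
cache = DictStore()

class EscalatingGroupForecast:
    def __init__(self, project_id, group_id, forecast):
        self.project_id = project_id
        self.group_id = group_id
        self.forecast = forecast

    @staticmethod
    def build_storage_identifier(project_id, group_id):
        # Hypothetical key format for illustration only.
        return f"escalating-forecast:{project_id}:{group_id}"

    def to_dict(self):
        return {
            "project_id": self.project_id,
            "group_id": self.group_id,
            "forecast": self.forecast,
        }

    def save(self) -> None:
        identifier = self.build_storage_identifier(self.project_id, self.group_id)
        payload = self.to_dict()
        ttl = timedelta(days=TWO_WEEKS_IN_DAYS_TTL)
        nodestore.set(identifier, payload, ttl=ttl)
        # Write-through: callers reading via the cache always see the same
        # forecast that was just persisted to nodestore.
        cache.set(identifier, payload, ttl=ttl.total_seconds())
```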
escalating_forecast = cache.get(forecast_cache_key)
date_now = datetime.now().date()
if escalating_forecast is None:
    escalating_forecast = EscalatingGroupForecast.fetch(group.project.id, group.id)
You can move the cache lookup directly into the fetch method, thus simplifying the is_escalating method.
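The suggested cache-first fetch could look like the sketch below: check the cache, fall back to nodestore on a miss, and warm the cache for the next caller. The storage layers and key format are simplified stand-ins, not Sentry's actual modules.

```python
class DictStore:
    """Stand-in for both nodestore and the cache backend."""

    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value

nodestore = DictStore()
cache = DictStore()

class EscalatingGroupForecast:
    @staticmethod
    def build_storage_identifier(project_id, group_id):
        # Hypothetical key format for illustration only.
        return f"escalating-forecast:{project_id}:{group_id}"

    @classmethod
    def fetch(cls, project_id, group_id):
        key = cls.build_storage_identifier(project_id, group_id)
        forecast = cache.get(key)
        if forecast is not None:
            return forecast  # cache hit: no nodestore round trip
        forecast = nodestore.get(key)
        if forecast is not None:
            cache.set(key, forecast)  # warm the cache for the next caller
        return forecast
```

With this shape, is_escalating can call fetch unconditionally and drop its own cache.get / None check.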
__all__ = ["query_groups_past_counts", "parse_groups_past_counts"]

REFERRER = "sentry.issues.escalating"
ELEMENTS_PER_SNUBA_PAGE = 10000  # This is the maximum value for Snuba
# The amount of data needed to generate a group forecast
BUCKETS_PER_GROUP = 7 * 24
ONE_WEEK_DURATION = 7
IS_ESCALATING_REFERRER = "sentry.issues.escalating.is_escalating"
I realized it would be better to import the referrers from referrer.py.
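Centralizing the referrer strings, as suggested above, typically means defining them once in an Enum and referencing the members at call sites instead of duplicating literals. A minimal sketch of that pattern; the member names are illustrative, not the actual contents of Sentry's referrer.py.

```python
from enum import Enum

class Referrer(Enum):
    # Hypothetical members mirroring the constants in this module.
    ISSUES_ESCALATING = "sentry.issues.escalating"
    ISSUES_ESCALATING_IS_ESCALATING = "sentry.issues.escalating.is_escalating"

# Call sites reference the enum member rather than a duplicated string literal:
REFERRER = Referrer.ISSUES_ESCALATING.value
IS_ESCALATING_REFERRER = Referrer.ISSUES_ESCALATING_IS_ESCALATING.value
```

This keeps every referrer string defined in one place, so a rename cannot silently diverge between modules.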
WOR-2762
Acceptance Criteria: