many major compile-time perf regressions in jld-day15-parser #34042

Closed · brson opened this issue Jun 2, 2016 · 15 comments
Labels: I-compiletime, regression-from-stable-to-nightly, T-compiler

Comments

@brson (Contributor) commented Jun 2, 2016

Look at the graph.

brson added the I-slow and regression-from-stable-to-nightly labels on Jun 2, 2016
@brson (Contributor, Author) commented Jun 2, 2016

I'm having a hard time getting the rustc-perf comparison page to tell me which passes regressed.

sfackler added the T-compiler label on Jun 2, 2016
@eddyb (Member) commented Jun 2, 2016

Also see #33889.

sanxiyn added the I-compiletime label and removed the I-slow label on Jun 3, 2016
@brson (Contributor, Author) commented Jun 4, 2016

The rustc-perf site changed and the link no longer goes somewhere useful.

@durka (Contributor) commented Jun 4, 2016

Fixed link (s/index.html/graphs.html)

@eddyb (Member) commented Jun 4, 2016

@brson What I do is use Tab + Spacebar to bisect the pass set.

Meaningful variance by passes (can also see them combined):

EDIT: Ahh, missed the type-checking one. Also, there's some interesting correlation, almost as if the whole compiler was running faster for a short time. Changes to jemalloc? #33425? The systems running these benchmarks?
It would be useful if we had more metadata. Are the dates nightlies? Then we can probably use the builder logs to find out the commit ranges for each change.
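For reference, here is a minimal sketch of turning two nightlies into a commit range, assuming you have already pulled the bounding commit hashes out of the builder logs (or out of each nightly's `rustc --version` string); the hashes below are placeholders, not values from this thread:

```sh
# Placeholder hashes standing in for the commits of two adjacent nightlies.
OLD=aaaaaaa   # commit the earlier nightly was built from
NEW=bbbbbbb   # commit the later nightly was built from

# List the commits (mostly bors merges) that landed in between:
git log --oneline --first-parent "$OLD..$NEW"
```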

@eddyb (Member) commented Jun 4, 2016

A rough approximation for the strange dip is this range of commits, which includes #33491. Not much else of relevance AFAICT.

EDIT: This range might also include the large regression after the dip.

@eddyb (Member) commented Jun 4, 2016

Looks like the first jump in translation time, 40s -> 47s, is indeed caused by #33425. Or at least, the date matches.

@eddyb (Member) commented Jun 4, 2016

For the last jump in translation time, 74s -> 81s, I have another approximate range of commits with #33602 as a potential culprit (also my fault? 😞).

@nrc Could we run the same process that generates the graphs on individual PRs, to confirm such regressions?

@arielb1 (Contributor) commented Jun 5, 2016

This seems to be fixed on the latest revision - translation takes 5s on my laptop - probably thanks to #33816. I will wait for the next nightly before closing.

@nrc (Member) commented Jun 7, 2016

@eddyb there is no way to do that, sorry. It's a feature nmatsakis has also requested. The easiest way to simulate it is to run -Z time-passes locally before and after the PR, as sketched below.
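A minimal sketch of that manual comparison, assuming a compiler built from before and after the suspect PR and using the crate from this issue as the test input (the file name here is a placeholder):

```sh
# -Z time-passes is an unstable flag, so this needs a nightly/dev rustc.
# Compile once with the pre-PR compiler:
rustc -Z time-passes day15-parser.rs > before.txt 2>&1

# ...and once with the post-PR compiler:
rustc -Z time-passes day15-parser.rs > after.txt 2>&1

# Compare the per-pass timings to see which passes regressed:
diff before.txt after.txt
```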

@arielb1 (Contributor) commented Jun 7, 2016

@nrc I think we should run benchmarks for each bors merge and store the results. Maybe even do that on the buildbots?

@eddyb (Member) commented Jun 7, 2016

@arielb1 Sadly, some of the buildbots share resources, so their timings aren't deterministic at all.

@nrc (Member) commented Jun 7, 2016

@arielb1 we don't have the hardware for that at the moment. We could do this if we decided it was useful enough to justify buying and maintaining more hardware, but I don't think that is the case for now.

@nikomatsakis (Contributor) commented

This chart suggests that our compilation time has (very recently, in the last few days) improved relative to December 2015, which seems to be the baseline that @brson was originally comparing against.

@brson (Contributor, Author) commented Jun 11, 2016

Sweet.
