
Use debug_assert! instead of assert! where possible #52956

Closed · ljedrz wants to merge 1 commit from the debug_asserts branch

Conversation

@ljedrz (Contributor) commented Aug 1, 2018

We are using a *lot* of `assert!`ions in the compiler. I propose using `debug_assert!`s where possible instead, for the following reasons:

1. `assert!`s are not free - they incur both compilation and runtime performance penalties
2. test builds should be run with debug assertions on; `debug_assert!`s should suffice to notify us when something is wrong
3. the compiler code (besides `core`/`std` etc.) is not user-facing, so we don't have to be prepared e.g. for invalid arguments coming from runtime input
4. regular `assert!`s panic during runtime, so it's not like we can allow such behavior anyway - we need to ensure it doesn't happen before we ship

I didn't alter tests (no runtime penalty), user-facing code (it requires a more thorough analysis; I might do it if this proposal passes), or a few modules I am unfamiliar with.
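
As a minimal sketch of the difference the proposal relies on (the functions below are hypothetical, not taken from the compiler): `assert!` always evaluates its condition, while `debug_assert!` only does so in builds with debug assertions enabled and is compiled out otherwise.

```rust
// Hypothetical example; `expensive_invariant_check` stands in for the kind
// of non-trivial condition discussed in this PR.
fn expensive_invariant_check(values: &[u32]) -> bool {
    values.windows(2).all(|w| w[0] <= w[1])
}

fn process(values: &[u32]) {
    // Always compiled in and always evaluated, even in release builds.
    assert!(expensive_invariant_check(values), "values must be sorted");

    // Only evaluated when debug assertions are enabled (debug builds, or
    // release builds with `-C debug-assertions=on`); otherwise both the
    // check and the evaluation of its condition are compiled out.
    debug_assert!(expensive_invariant_check(values), "values must be sorted");
}

fn main() {
    process(&[1, 2, 3]);
}
```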

@rust-highfive (Collaborator)

r? @petrochenkov

(rust_highfive has picked a reviewer for you, use r? to override)

@rust-highfive added the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties) on Aug 1, 2018
@ChrisJefferson (Contributor)

Some of these asserts are inside unsafe code, and my initial (uninformed) impression is that they are guarding against undefined behaviour occurring.

@Mark-Simulacrum (Member)

I don't think this is the correct approach to doing something like this. We have hundreds of issues that report the compiler ICE-ing today -- that means that panics and asserts do in fact catch bugs in shipped versions of the compiler. Furthermore, it's unclear whether a thorough review of each assert was done to determine whether it is valid to turn it into a debug_assert (read: can't cause misbehavior of the compiler, including not failing to compile invalid programs).

I do think that, if you're interested, a PR that is more limited in scope, ideally with documentation (in commit messages) explaining why a certain group of changes is reasonable, would be better received (at least by me).

@ljedrz (Contributor, Author) commented Aug 1, 2018

@Mark-Simulacrum sounds reasonable; I picked a pretty broad scope to gather opinions (e.g. whether this is a plain bad idea for some modules) and, ideally, to get a perf run of a try build showing whether this has a big performance impact, i.e. whether it is worthwhile to investigate further and where (MIR? NLL?).

@Mark-Simulacrum (Member)

Ideally perf runs should be done locally for investigative work (perf.rlo doesn't really present fine enough granularity).

We can try, though.

@bors try

@bors (Contributor) commented Aug 1, 2018

⌛ Trying commit c3386db with merge 6798574...

bors added a commit that referenced this pull request Aug 1, 2018
Use debug_assert! instead of assert! where possible

@bors (Contributor) commented Aug 1, 2018

☀️ Test successful - status-travis
State: approved= try=True

@ljedrz (Contributor, Author) commented Aug 2, 2018

The build was successful, but I don't seem to be able to compare 6798574 against its parent on the perf page; do perf runs of try builds need to be scheduled manually?

@kennytm (Member) commented Aug 2, 2018

@rust-timer build 6798574

@kennytm added the S-waiting-on-perf label (Status: Waiting on a perf run to be completed) on Aug 2, 2018
@rust-timer (Collaborator)

Success: Queued 6798574 with parent 11f812a, comparison URL.

@ljedrz (Contributor, Author) commented Aug 2, 2018

Looks like there are some pretty nice wins, especially for issue #46449, regression #31157 and #38528.

Maybe those can point to a more specific direction where auditing assert!s would be a good idea?

@varkor (Member) commented Aug 4, 2018

Worth mentioning here that #53025 has been opened with some specific cases. Personally, I would prefer some justification other than simply performance for why these changes are unlikely to cause regressions in errors, but I'd welcome other opinions.

@kennytm removed the S-waiting-on-perf label (Status: Waiting on a perf run to be completed) on Aug 4, 2018
@petrochenkov (Contributor)

I think this should be done on a case-by-case basis, when profiling shows there's a problem.
I've personally been adding asserts with trivially calculated conditions for suspicious cases I wasn't sure of, and for cases that could be broken by changes in other parts of the code; I wouldn't want those to be turned into debug-only asserts.

So my disposition so far is to close this PR and send PRs like #53025 for non-trivial conditions or asserts showing up on profiles.
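
To illustrate the distinction being drawn here (both functions are made-up examples, not compiler code): a trivially calculated condition is essentially free and can stay a hard `assert!` even in hot code, whereas a condition doing real work per call is the kind of check worth demoting to `debug_assert!` once profiling shows it matters.

```rust
// Hypothetical examples of the two kinds of assertion conditions.

fn pop_pair(stack: &mut Vec<u64>) -> (u64, u64) {
    // Trivially calculated condition: a length comparison costs next to
    // nothing, so keeping it as a hard `assert!` is fine.
    assert!(stack.len() >= 2);
    let a = stack.pop().unwrap();
    let b = stack.pop().unwrap();
    (a, b)
}

fn check_unique(ids: &[u64]) {
    // Non-trivial condition: O(n log n) work per call. This is the kind of
    // check that becomes a `debug_assert!` candidate if it shows up in
    // profiles.
    debug_assert!({
        let mut sorted = ids.to_vec();
        sorted.sort_unstable();
        sorted.windows(2).all(|w| w[0] != w[1])
    });
}

fn main() {
    let mut stack = vec![1, 2, 3];
    let _ = pop_pair(&mut stack);
    check_unique(&[10, 20, 30]);
}
```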

@petrochenkov added the S-waiting-on-author label (Status: This is awaiting some action (such as code changes or more information) from the author) and removed the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties) on Aug 4, 2018
@ljedrz (Contributor, Author) commented Aug 4, 2018

@petrochenkov Agreed - trivial assertions should be perfectly fine, even in hot spots.

I'm happy that the perf run was able to indicate which assertions are complex; it would be much more difficult to determine it by just browsing the code. This PR has served its purpose.

@ljedrz closed this on Aug 4, 2018
@ljedrz deleted the debug_asserts branch on August 8, 2018 at 06:56
kennytm added a commit to kennytm/rust that referenced this pull request Aug 10, 2018
Consider changing assert! to debug_assert! when it calls visit_with

The perf run from rust-lang#52956 revealed that there were 3 benchmarks that benefited most from changing `assert!`s to `debug_assert!`s:

- issue rust-lang#46449: avg -4.7% for -check
- deeply-nested (AKA rust-lang#38528): avg -3.4% for -check
- regression rust-lang#31157: avg -3.2% for -check

I analyzed their fixing PRs and decided to look for potentially heavy assertions in the files they modified. I noticed that all of the non-trivial ones contained indirect calls to `visit_with()`.

It might be a good idea to consider changing `assert!` to `debug_assert!` in those places in order to get the performance wins shown by the benchmarks.
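
A sketch of the pattern this describes, using made-up types (`Node`, `all_nodes`, and `lower` are illustrative stand-ins, not the actual rustc visitor API): the assertion looks cheap at the call site, but its condition triggers a full recursive traversal.

```rust
// Illustrative stand-ins only; not the real rustc `visit_with` machinery.
struct Node {
    resolved: bool,
    children: Vec<Node>,
}

// A simplified `visit_with`-style walk: returns true if `f` holds for
// every node in the tree.
fn all_nodes(node: &Node, f: &mut dyn FnMut(&Node) -> bool) -> bool {
    if !f(node) {
        return false;
    }
    for child in &node.children {
        if !all_nodes(child, f) {
            return false;
        }
    }
    true
}

fn lower(root: &Node) {
    // The condition walks the whole tree, so this assertion is far from
    // free; demoting it to `debug_assert!` drops the traversal from
    // release builds while debug/CI builds still run it.
    debug_assert!(all_nodes(root, &mut |n| n.resolved));
    // ... the actual lowering work would follow here ...
}

fn main() {
    let root = Node {
        resolved: true,
        children: vec![Node { resolved: true, children: Vec::new() }],
    };
    lower(&root);
}
```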
GuillaumeGomez added a commit to GuillaumeGomez/rust that referenced this pull request Aug 12, 2018
Consider changing assert! to debug_assert! when it calls visit_with

Labels
S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author.
9 participants