
Extend the lower bounds of some of the benchmarks to also include 0 #12386

Merged

Conversation

@koute (Contributor) commented Sep 29, 2022

This PR extends the lower bounds of the components of some of the benchmarks to also include 0.

This is, in a way, a follow-up of #11806, where I made it so that the base weights are always set to be equal to the minimum execution time. That fixed a problem where (due to how linear regression works) the base weights could sometimes come out as zero, but at a cost: some of the weights can now be too high, since their benchmarks were never actually executed over the full range of their inputs (e.g. see this comment: #12325 (comment)).
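To make the regression issue concrete, here's a minimal sketch with made-up timings (the sample values are hypothetical, purely for illustration):

fn main() {
    // Hypothetical measurements: (component value, execution time in µs).
    // The benchmark was only ever run for s >= 1, never for s = 0.
    let samples = [(1.0_f64, 5.0_f64), (2.0, 10.0), (3.0, 15.0)];

    // Ordinary least squares fit of: time = base + slope * s.
    let n = samples.len() as f64;
    let sx: f64 = samples.iter().map(|&(x, _)| x).sum();
    let sy: f64 = samples.iter().map(|&(_, y)| y).sum();
    let sxx: f64 = samples.iter().map(|&(x, _)| x * x).sum();
    let sxy: f64 = samples.iter().map(|&(x, y)| x * y).sum();
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let base = (sy - slope * sx) / n;

    // Prints "slope = 5 µs, base = 0 µs": a zero base weight, even though
    // no real extrinsic executes in zero time.
    println!("slope = {slope} µs, base = {base} µs");
}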

So here I went through our benchmarks and modified those whose components started at 1 so that they now start at 0. I didn't do this blindly, nor exhaustively (we still have a lot of such benchmarks); I did it either in places where it made sense, or in places where #12325 has shown that there might be a problem. I mostly looked at the benchmarks where the DB reads/writes changed, and at those benchmarks where the base weights changed drastically.

I ran every benchmark I modified, and the lowered bounds should not generate any runtime errors. Most of the changes are trivial.
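For reference, a typical change looks like this (a hypothetical pallet benchmark; `do_something` and `MaxItems` are made-up names, and only the lower bound of the component changes):

benchmarks! {
    do_something {
        // Before: `let s in 1 .. T::MaxItems::get();`
        // The s == 0 case was never measured, so the fitted weight had to
        // extrapolate below the measured range.
        let s in 0 .. T::MaxItems::get();
        let caller: T::AccountId = whitelisted_caller();
        // ...set up `s` items for the extrinsic to process...
    }: _(RawOrigin::Signed(caller), s)
}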

@koute added the following labels on Sep 29, 2022: A0-please_review (Pull request needs code review.), B0-silent (Changes should not be mentioned in any release notes), C1-low (PR touches the given topic and has a low impact on builders.), D3-trivial 🧸 (PR contains trivial changes in a runtime directory that do not require an audit)
@shawntabrizi (Member) left a comment

thank you

after this is merged, i will run everything again, and see what else we can find, but this is probably solving most of the issues :)

@koute (Contributor, Author) commented Sep 30, 2022

> thank you

No need to thank me; it was my PR so it's my responsibility to see it through and get it up to a standard of quality that I (we) expect, or die trying. (: I should be thanking you for being patient with me.

> after this is merged, i will run everything again, and see what else we can find, but this is probably solving most of the issues :)

One major group of benchmarks that I didn't touch, and which is potentially problematic (before my PR, at least on my machine, at least one of those was generating a weight of zero), is the benches in election-provider-multi-phase. They use bounds like this:

// number of votes in snapshot.
let v in (T::BenchmarkingConfig::VOTERS[0]) .. T::BenchmarkingConfig::VOTERS[1];
// number of targets in snapshot.
let t in (T::BenchmarkingConfig::TARGETS[0]) .. T::BenchmarkingConfig::TARGETS[1];

With the bounds actually defined in bin/node/runtime/src/lib.rs:

/// The numbers configured here could always be more than the maximum limits of staking pallet
/// to ensure election snapshot will not run out of memory. For now, we set them to smaller values
/// since the staking is bounded and the weight pipeline takes hours for this single pallet.
pub struct ElectionProviderBenchmarkConfig;
impl pallet_election_provider_multi_phase::BenchmarkingConfig for ElectionProviderBenchmarkConfig {
    const VOTERS: [u32; 2] = [1000, 2000];
    const TARGETS: [u32; 2] = [500, 1000];
    const ACTIVE_VOTERS: [u32; 2] = [500, 800];
    const DESIRED_TARGETS: [u32; 2] = [200, 400];
    const SNAPSHOT_MAXIMUM_VOTERS: u32 = 1000;
    const MINER_MAXIMUM_VOTERS: u32 = 1000;
    const MAXIMUM_TARGETS: u32 = 300;
}

The thing is, from what I can see those benchmarks have internal invariants: TARGETS <= VOTERS, ACTIVE_VOTERS <= VOTERS, and DESIRED_TARGETS <= TARGETS. So just naively lowering the bounds doesn't work. I can see two approaches we could take here:

a) Deal with this in the benchmarks one-by-one and just use std::cmp::min/std::cmp::max to make sure the invariants hold.
b) Modify the benchmarking machinery so that you could specify those invariants, and then the benchmarking CLI would only run the benchmarks at points which make sense. That is, even if you'd set everything to start at 1 it would never try to run the benchmark when TARGETS > VOTERS.

The main downside of (a) is that it will still generate numbers for component values which don't make sense, which I'm pretty sure will negatively affect the results when done at this scale. So I think (b) would be preferable here, even though it would require introducing some extra complexity into the benchmarking machinery. (Although I suppose we don't need to support arbitrary constraints; just the very simple ones needed here.)
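For concreteness, a sketch of what (a) would look like in one of those benchmarks (shadowing the component with a clamped value; note the regression would still be fed the raw `t`, which is exactly the downside described above):

// number of votes in snapshot.
let v in 0 .. T::BenchmarkingConfig::VOTERS[1];
// number of targets in snapshot.
let t in 0 .. T::BenchmarkingConfig::TARGETS[1];
// Clamp so that the `t <= v` invariant holds even when the benchmarking
// CLI picks a point where the raw `t` would exceed `v`.
let t = t.min(v);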

@shawntabrizi Does this sound reasonable to you?

Also, and this is only tangentially related, I am intrigued by the comment that the election pallet is extremely slow and takes hours to run. Now I kinda want to see if it'd be possible to make it faster. (:

@ggwpez (Member) left a comment

I think we should merge this if it already improves the situation.

Maybe @kianenigma can tell us if we can somehow improve the constants in the multi-phase benches?

@ggwpez (Member) commented Oct 5, 2022

Can we merge this, @koute?

@koute (Contributor, Author) commented Oct 6, 2022

@ggwpez Ah sorry, I somehow missed that you +1'd it! Yes, we can! I'll just resolve the conflicts.

@koute (Contributor, Author) commented Oct 7, 2022

bot merge

@paritytech-processbot (bot) merged commit a8e951d into paritytech:master on Oct 7, 2022
ark0f pushed a commit to gear-tech/substrate that referenced this pull request on Feb 27, 2023:

…paritytech#12386)

* Extend the lower bounds of some of the benchmarks to also include `0`
* Fix verify snippet for `pallet_bounties/spend_funds`