
remove Pkg.add from Benchmark #711

Merged: 26 commits merged into main from pa/benchmark_ci2 on Mar 5, 2024
Conversation

@palday (Member) commented Aug 23, 2023

No description provided.

codecov bot commented Aug 23, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.33%. Comparing base (ff82b8f) to head (4608e96).

❗ Current head 4608e96 differs from the pull request's most recent head 08764ed. Consider uploading reports for commit 08764ed to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #711      +/-   ##
==========================================
+ Coverage   96.27%   96.33%   +0.05%     
==========================================
  Files          34       34              
  Lines        3356     3356              
==========================================
+ Hits         3231     3233       +2     
+ Misses        125      123       -2     
| Flag    | Coverage Δ     |
|---------|----------------|
| current | 96.27% <ø> (ø) |
| minimum | 96.23% <ø> (?) |

Flags with carried forward coverage won't be shown.


github-actions bot (Contributor) commented Aug 23, 2023

Benchmark Report for /home/runner/work/MixedModels.jl/MixedModels.jl

Job Properties

  • Time of benchmarks:
    • Target: 5 Mar 2024 - 19:49
    • Baseline: 5 Mar 2024 - 19:57
  • Package commits:
    • Target: a99bec
    • Baseline: 510dcc
  • Julia commits:
    • Target: bd47ec
    • Baseline: bd47ec
  • Julia command flags:
    • Target: None
    • Baseline: -C,native,-J/opt/hostedtoolcache/julia/1.10.2/x64/lib/julia/sys.so,-g1,-O3,-e,using Pkg; Pkg.update()
  • Environment variables:
    • Target: None
    • Baseline: None

Results

A ratio greater than 1.0 denotes a possible regression (marked with ❌), while a ratio less than 1.0 denotes a possible improvement (marked with ✅). Only significant results, i.e. results that indicate possible regressions or improvements, are shown below; an empty table therefore means that all benchmark results remained invariant between builds. (A minimal sketch of how these judgements are computed follows the table.)

| ID                                | time ratio    | memory ratio |
|-----------------------------------|---------------|--------------|
| ["crossed", "insteval:1"]         | 0.74 (10%) ✅ | 1.00 (5%)    |
| ["crossed", "mrk17_exp1:1"]       | 8.40 (10%) ❌ | 1.00 (5%)    |
| ["crossedvector", "kb07:3"]       | 0.46 (10%) ✅ | 1.00 (5%)    |
| ["crossedvector", "mrk17_exp1:2"] | 0.77 (10%) ✅ | 1.00 (5%)    |

Benchmark Group List

Here's a list of all the benchmark groups executed by this job:

  • ["crossed"]
  • ["crossedvector"]
  • ["nested"]
  • ["singlevector"]

Julia versioninfo

Target

Julia Version 1.10.2
Commit bd47eca2c8a (2024-03-01 10:14 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
      Ubuntu 22.04.4 LTS
  uname: Linux 6.5.0-1015-azure #15~22.04.1-Ubuntu SMP Tue Feb 13 01:15:12 UTC 2024 x86_64 x86_64
  CPU: AMD EPYC 7763 64-Core Processor: 
              speed         user         nice          sys         idle          irq
       #1  3037 MHz        683 s          0 s        164 s       2483 s          0 s
       #2  2624 MHz        961 s          0 s        334 s       2036 s          0 s
       #3  3290 MHz       1666 s          0 s        118 s       1551 s          0 s
       #4  3243 MHz       1438 s          0 s        109 s       1790 s          0 s
  Memory: 15.606491088867188 GB (13945.26171875 MB free)
  Uptime: 335.88 sec
  Load Avg:  1.57  1.15  0.54
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

Baseline

Julia Version 1.10.2
Commit bd47eca2c8a (2024-03-01 10:14 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
      Ubuntu 22.04.4 LTS
  uname: Linux 6.5.0-1015-azure #15~22.04.1-Ubuntu SMP Tue Feb 13 01:15:12 UTC 2024 x86_64 x86_64
  CPU: AMD EPYC 7763 64-Core Processor: 
              speed         user         nice          sys         idle          irq
       #1  2587 MHz       1933 s          0 s        554 s       5399 s          0 s
       #2  2445 MHz       2373 s          0 s        993 s       4524 s          0 s
       #3  2445 MHz       3854 s          0 s        386 s       3658 s          0 s
       #4  3242 MHz       3211 s          0 s        335 s       4351 s          0 s
  Memory: 15.606491088867188 GB (13658.625 MB free)
  Uptime: 792.75 sec
  Load Avg:  1.78  1.64  1.06
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)

@palday marked this pull request as ready for review March 5, 2024 19:40
@palday requested a review from dmbates March 5, 2024 20:55
@dmbates (Collaborator) left a comment


Thanks for cleaning this up.

I feel that we should think about whether we want to continue to do package benchmarks. It seems that the results are so variable as not to be worthwhile.

For example, if you check out the main branch and run the package benchmark you are essentially benchmarking the main branch against itself and you frequently end up with "significant" differences.
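One way to put a number on that variability, assuming the suite is driven by PkgBenchmark.jl (the report format above suggests it, but the thread doesn't say so explicitly), is to judge a ref against itself and count how many rows still get flagged:

```julia
using PkgBenchmark

# Benchmark main against itself: every ❌/✅ row in the resulting report is
# runner noise rather than a real change. (Call pattern assumed, not taken
# from this repository's CI scripts.)
results = judge("MixedModels", "main", "main")
export_markdown("main_vs_main.md", results)
```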

@palday (Member, Author) commented Mar 5, 2024

> Thanks for cleaning this up.
>
> I feel that we should think about whether we want to continue to do package benchmarks. It seems that the results are so variable as not to be worthwhile.
>
> For example, if you check out the main branch and run the package benchmark you are essentially benchmarking the main branch against itself and you frequently end up with "significant" differences.

I agree. I can disable this as CI and we can revisit this if we find a good set of benchmarks / thresholds so that we're not just seeing noise in GitHub's server farm.
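If the benchmarks are revisited, one knob for those thresholds would be per-benchmark tolerances in the suite itself. A hedged sketch, assuming a standard BenchmarkTools BenchmarkGroup suite; the group and entry names here are illustrative, not the repository's actual ones:

```julia
using BenchmarkTools

suite = BenchmarkGroup()
suite["crossed"] = BenchmarkGroup()
suite["crossed"]["insteval"] = @benchmarkable sum(rand(1_000))  # placeholder workload

# Widen the tolerances consulted by `judge`, so only large shifts are flagged.
for (_, bench) in leaves(suite)
    bench.params.time_tolerance   = 0.25  # flag only >25% time changes
    bench.params.memory_tolerance = 0.10  # flag only >10% memory changes
end
```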

@palday merged commit cfd3023 into main Mar 5, 2024
7 of 8 checks passed
@palday deleted the pa/benchmark_ci2 branch March 5, 2024 22:18