From c0f5ff3f436af98321bf3804dda998739a6917bd Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Tue, 18 Jun 2024 10:38:52 +0000
Subject: [PATCH] build based on e36d467

---
 dev/.documenter-siteinfo.json |  2 +-
 dev/_index/index.html         |  2 +-
 dev/api/index.html            | 31 +++++++++--------
 dev/index.html                | 61 ++++++++++++++++++++++------------
 dev/objects.inv               | Bin 536 -> 536 bytes
 dev/search_index.js           |  2 +-
 6 files changed, 61 insertions(+), 37 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 9431efa..218db1d 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-06-10T06:26:19","documenter_version":"1.4.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-06-18T10:38:49","documenter_version":"1.4.1"}}
\ No newline at end of file
diff --git a/dev/_index/index.html b/dev/_index/index.html
index f45c94d..6e0dfc6 100644
--- a/dev/_index/index.html
+++ b/dev/_index/index.html
@@ -1,2 +1,2 @@
-- · AirspeedVelocity.jl
+- · AirspeedVelocity.jl
diff --git a/dev/api/index.html b/dev/api/index.html
index 0108ace..61f4731 100644
--- a/dev/api/index.html
+++ b/dev/api/index.html
@@ -1,15 +1,20 @@
-API · AirspeedVelocity.jl

API

Creating benchmarks

From the command line:

AirspeedVelocity.BenchPkg.benchpkgFunction
benchpkg package_name [-r --rev <arg>]
-                      [--url <arg>]
-                      [--path <arg>]
-                      [-o, --output-dir <arg>]
-                      [-e, --exeflags <arg>]
-                      [-a, --add <arg>]
-                      [-s, --script <arg>]
-                      [--bench-on <arg>]
-                      [-f, --filter <arg>]
-                      [--nsamples-load-time <arg>]
-                      [--tune]

Benchmark a package over a set of revisions.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma). Use dirty to benchmark the current state of the package at path (and not a git commit).
  • --url <arg>: URL of the package.
  • --path <arg>: Path of the package.
  • -o, --output-dir <arg>: Where to save the JSON results.
  • -e, --exeflags <arg>: CLI flags for Julia (default: none).
  • -a, --add <arg>: Extra packages needed (delimit by comma).
  • -s, --script <arg>: The benchmark script. Default: benchmark/benchmarks.jl downloaded from stable.
  • --bench-on <arg>: If the script is not set, this specifies the revision at which to download benchmark/benchmarks.jl from the package.
  • -f, --filter <arg>: Filter the benchmarks to run (delimit by comma).
  • --nsamples-load-time <arg>: Number of samples to take when measuring load time of the package (default: 5). (This means starting a Julia process for each sample.)

Flags

  • --tune: Whether to run benchmarks with tuning (default: false).
source

Or, directly from Julia:

AirspeedVelocity.Utils.benchmarkMethod
benchmark(package_name::String, rev::Union{String,Vector{String}}; output_dir::String=".", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])

Run benchmarks for a given Julia package.

This function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.

The benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.

The results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.

Arguments

  • package_name::String: The name of the package for which to run the benchmarks.
  • rev::Union{String,Vector{String}}: The revision of the package for which to run the benchmarks. You can also pass a vector of revisions to run benchmarks for multiple versions of a package.
  • output_dir::String=".": The directory where the benchmark results JSON file will be saved (default: current directory).
  • script::Union{String,Nothing}=nothing: The path to the benchmark script file. If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.
  • tune::Bool=false: Whether to run benchmarks with tuning (default: false).
  • exeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).
  • extra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.
  • url::Union{String,Nothing}=nothing: URL of the package.
  • path::Union{String,Nothing}=nothing: Path to the package.
  • benchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.
  • filter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).
  • nsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.
source
AirspeedVelocity.Utils.benchmarkMethod
benchmark(package_specs::Union{PackageSpec,Vector{PackageSpec}}; output_dir::String=".", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])

Run benchmarks for a given Julia package.

This function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.

The benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.

The results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.

Arguments

  • package::Union{PackageSpec,Vector{PackageSpec}}: The package specification containing information about the package for which to run the benchmarks. You can also pass a vector of package specifications to run benchmarks for multiple versions of a package.
  • output_dir::String=".": The directory where the benchmark results JSON file will be saved (default: current directory).
  • script::Union{String,Nothing}=nothing: The path to the benchmark script file. If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.
  • tune::Bool=false: Whether to run benchmarks with tuning (default: false).
  • exeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).
  • extra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.
  • benchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.
  • filter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).
  • nsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.
source

Loading and visualizing benchmarks

From the command line:

AirspeedVelocity.BenchPkgTable.benchpkgtableFunction
benchpkgtable package_name [-r --rev <arg>] [-i --input-dir <arg>]
-                           [--ratio] [--mode <arg>]

Print a table of the benchmarks of a package as created with benchpkg.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma).
  • -i, --input-dir <arg>: Where the JSON results were saved (default: ".").

Flags

  • --ratio: Whether to include the ratio (default: false). Only applies when comparing two revisions.
  • --mode: Table mode(s). Valid values are "time" (default), to print the benchmark time, or "memory", to print the allocation and memory usage. Both options can be passed, if delimited by comma.
source
AirspeedVelocity.BenchPkgPlot.benchpkgplotFunction
benchpkgplot package_name [-r --rev <arg>] [-i --input-dir <arg>]
+API · AirspeedVelocity.jl

API

Creating benchmarks

From the command line:

AirspeedVelocity.BenchPkg.benchpkgFunction
benchpkg [package_name] [-r --rev <arg>]
+                        [--url <arg>]
+                        [--path <arg>]
+                        [-o, --output-dir <arg>]
+                        [-e, --exeflags <arg>]
+                        [-a, --add <arg>]
+                        [-s, --script <arg>]
+                        [--bench-on <arg>]
+                        [-f, --filter <arg>]
+                        [--nsamples-load-time <arg>]
+                        [--tune]
+                        [--dont-print]

Benchmark a package over a set of revisions.

Arguments

  • package_name: Name of the package. If not given, the package is assumed to be the current directory.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma). Use dirty to benchmark the current state of the package at path (and not a git commit). The default is {DEFAULT},dirty, which will attempt to find the default branch of the package.
  • --url <arg>: URL of the package.
  • --path <arg>: Path of the package. The default is . if other arguments are not given.
  • -o, --output-dir <arg>: Where to save the JSON results. The default is ..
  • -e, --exeflags <arg>: CLI flags for Julia (default: none).
  • -a, --add <arg>: Extra packages needed (delimit by comma).
  • -s, --script <arg>: The benchmark script. Default: benchmark/benchmarks.jl downloaded from stable.
  • --bench-on <arg>: If the script is not set, this specifies the revision at which to download benchmark/benchmarks.jl from the package.
  • -f, --filter <arg>: Filter the benchmarks to run (delimit by comma).
  • --nsamples-load-time <arg>: Number of samples to take when measuring load time of the package (default: 5). (This means starting a Julia process for each sample.)
  • --dont-print: Don't print the table.

Flags

  • --tune: Whether to run benchmarks with tuning (default: false).
source

Or, directly from Julia:

AirspeedVelocity.Utils.benchmarkMethod
benchmark(package_name::String, rev::Union{String,Vector{String}}; output_dir::String=".", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])

Run benchmarks for a given Julia package.

This function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.

The benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.

The results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.

Arguments

  • package_name::String: The name of the package for which to run the benchmarks.
  • rev::Union{String,Vector{String}}: The revision of the package for which to run the benchmarks. You can also pass a vector of revisions to run benchmarks for multiple versions of a package.
  • output_dir::String=".": The directory where the benchmark results JSON file will be saved (default: current directory).
  • script::Union{String,Nothing}=nothing: The path to the benchmark script file. If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.
  • tune::Bool=false: Whether to run benchmarks with tuning (default: false).
  • exeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).
  • extra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.
  • url::Union{String,Nothing}=nothing: URL of the package.
  • path::Union{String,Nothing}=nothing: Path to the package.
  • benchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.
  • filter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).
  • nsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.
source
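For example, a minimal sketch of calling this method from Julia, reusing the package and revisions from the CLI examples elsewhere in these docs (the output directory is a placeholder):

    using AirspeedVelocity

    # Runs the default benchmark suite for each revision and writes
    # results_Transducers@<rev>.json files into ./results.
    AirspeedVelocity.Utils.benchmark(
        "Transducers", ["v0.4.20", "v0.4.70"];
        output_dir="results",
    )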
AirspeedVelocity.Utils.benchmarkMethod
benchmark(package_specs::Union{PackageSpec,Vector{PackageSpec}}; output_dir::String=".", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])

Run benchmarks for a given Julia package.

This function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.

The benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.

The results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.

Arguments

  • package::Union{PackageSpec,Vector{PackageSpec}}: The package specification containing information about the package for which to run the benchmarks. You can also pass a vector of package specifications to run benchmarks for multiple versions of a package.
  • output_dir::String=".": The directory where the benchmark results JSON file will be saved (default: current directory).
  • script::Union{String,Nothing}=nothing: The path to the benchmark script file. If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.
  • tune::Bool=false: Whether to run benchmarks with tuning (default: false).
  • exeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).
  • extra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.
  • benchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.
  • filter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).
  • nsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.
source
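A minimal sketch of the PackageSpec form of the same call, assuming standard Pkg.PackageSpec objects constructed with name and rev keywords (package name and revisions are the same placeholders as above):

    using Pkg: PackageSpec
    using AirspeedVelocity

    specs = [
        PackageSpec(; name="Transducers", rev="v0.4.20"),
        PackageSpec(; name="Transducers", rev="v0.4.70"),
    ]

    # Equivalent to the string/revision form, but with explicit package specs.
    AirspeedVelocity.Utils.benchmark(specs; output_dir="results")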

Loading and visualizing benchmarks

From the command line:

AirspeedVelocity.BenchPkgTable.benchpkgtableFunction
benchpkgtable [package_name] [-r --rev <arg>]
+                             [-i --input-dir <arg>]
+                             [--ratio]
+                             [--mode <arg>]
+                             [--url <arg>]
+                             [--path <arg>]

Print a table of the benchmarks of a package as created with benchpkg.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma). The default is {DEFAULT},dirty, which will attempt to find the default branch of the package.
  • -i, --input-dir <arg>: Where the JSON results were saved (default: ".").
  • --url <arg>: URL of the package. Only used to get the package name.
  • --path <arg>: Path of the package. The default is . if other arguments are not given. Only used to get the package name.

Flags

  • --ratio: Whether to include the ratio (default: false). Only applies when comparing two revisions.
  • --mode: Table mode(s). Valid values are "time" (default), to print the benchmark time, or "memory", to print the allocation and memory usage. Both options can be passed, if delimited by comma.
source
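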
AirspeedVelocity.BenchPkgPlot.benchpkgplotFunction
benchpkgplot package_name [-r --rev <arg>] [-i --input-dir <arg>]
                           [-o --output-dir <arg>] [-n --npart <arg>]
-                          [--format <arg>]

Plot the benchmarks of a package as created with benchpkg.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma).
  • -i, --input-dir <arg>: Where the JSON results were saved (default: ".").
  • -o, --output-dir <arg>: Where to save the plot results (default: ".").
  • -n, --npart <arg>: Max number of plots per page (default: 10).
  • --format <arg>: File type to save the plots as (default: "png").
source
AirspeedVelocity.Utils.load_resultsMethod
load_results(specs::Vector{PackageSpec}; input_dir::String=".")

Load the results from JSON files for each PackageSpec in the specs vector. The function assumes that the JSON files are located in the input_dir directory and are named as "results_{s}.json" where s is equal to PackageName@Rev.

The function returns a combined OrderedDict, to be input to the combined_plots function.

Arguments

  • specs::Vector{PackageSpec}: Vector of each package revision to be loaded (as PackageSpec).
  • input_dir::String=".": Directory where the results are stored. Default is the current directory.

Returns

  • OrderedDict{String,OrderedDict}: Combined results ready to be passed to the combined_plots function.
source
AirspeedVelocity.PlotUtils.combined_plotsMethod
combined_plots(combined_results::OrderedDict; npart=10)

Create a combined plot of the results loaded from the load_results function. The function partitions the plots into smaller groups of size npart (defaults to 10) and combines the plots in each group vertically. It returns an array of combined plots.

Arguments

  • combined_results::OrderedDict: Data to be plotted, obtained from the load_results function.
  • npart::Int=10: Max plots to be combined in a single vertical group. Default is 10.

Returns

  • Array{Plotly.Plot,1}: An array of combined Plots objects, with each element representing a group of up to npart vertical plots.
source
AirspeedVelocity.TableUtils.create_tableMethod
create_table(combined_results::OrderedDict; kws...)

Create a markdown table of the results loaded from the load_results function. If there are two results for a given benchmark, the table will have an additional column for the comparison, assuming the first revision is the one to compare against.

The formatter keyword argument generates the column value. It defaults to TableUtils.format_time, which prints the median time ± the interquartile range. TableUtils.format_memory is also available to print the number of allocations and the allocated memory.

source
+ [--format <arg>]

Plot the benchmarks of a package as created with benchpkg.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma).
  • -i, --input-dir <arg>: Where the JSON results were saved (default: ".").
  • -o, --output-dir <arg>: Where to save the plot results (default: ".").
  • -n, --npart <arg>: Max number of plots per page (default: 10).
  • --format <arg>: File type to save the plots as (default: "png").
source
AirspeedVelocity.Utils.load_resultsMethod
load_results(specs::Vector{PackageSpec}; input_dir::String=".")

Load the results from JSON files for each PackageSpec in the specs vector. The function assumes that the JSON files are located in the input_dir directory and are named as "results_{s}.json" where s is equal to PackageName@Rev.

The function returns a combined OrderedDict, to be input to the combined_plots function.

Arguments

  • specs::Vector{PackageSpec}: Vector of each package revision to be loaded (as PackageSpec).
  • input_dir::String=".": Directory where the results are stored. Default is the current directory.

Returns

  • OrderedDict{String,OrderedDict}: Combined results ready to be passed to the combined_plots function.
source
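For instance, a sketch of loading the results written by the benchmark sketches above (the input directory and revisions are placeholders):

    using Pkg: PackageSpec
    using AirspeedVelocity

    specs = [
        PackageSpec(; name="Transducers", rev="v0.4.20"),
        PackageSpec(; name="Transducers", rev="v0.4.70"),
    ]

    # Reads results_Transducers@v0.4.20.json and results_Transducers@v0.4.70.json
    # from ./results and returns a combined OrderedDict.
    combined_results = AirspeedVelocity.Utils.load_results(specs; input_dir="results")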
AirspeedVelocity.PlotUtils.combined_plotsMethod
combined_plots(combined_results::OrderedDict; npart=10)

Create a combined plot of the results loaded from the load_results function. The function partitions the plots into smaller groups of size npart (defaults to 10) and combines the plots in each group vertically. It returns an array of combined plots.

Arguments

  • combined_results::OrderedDict: Data to be plotted, obtained from the load_results function.
  • npart::Int=10: Max plots to be combined in a single vertical group. Default is 10.

Returns

  • Array{Plotly.Plot,1}: An array of combined Plots objects, with each element representing a group of up to npart vertical plots.
source
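A sketch of generating and saving the plot groups. It assumes the returned objects can be written with Plots.savefig; the docstring mentions both Plotly and Plots objects, so treat that call as an assumption rather than a documented API:

    using AirspeedVelocity
    using Plots: savefig

    # `combined_results` is the OrderedDict returned by the load_results sketch above.
    plots = AirspeedVelocity.PlotUtils.combined_plots(combined_results; npart=5)

    # Save each vertical group of up to 5 plots to its own file.
    for (i, p) in enumerate(plots)
        savefig(p, "benchmarks_page_$(i).pdf")
    end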
AirspeedVelocity.TableUtils.create_tableMethod
create_table(combined_results::OrderedDict; kws...)

Create a markdown table of the results loaded from the load_results function. If there are two results for a given benchmark, the table will have an additional column for the comparison, assuming the first revision is the one to compare against.

The formatter keyword argument generates the column value. It defaults to TableUtils.format_time, which prints the median time ± the interquartile range. TableUtils.format_memory is also available to print the number of allocations and the allocated memory.

source
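And a sketch of producing the markdown table from the same combined results, using the documented formatter keyword (file and variable names are carried over from the sketches above):

    using AirspeedVelocity

    # Default formatter: median time ± interquartile range per benchmark.
    print(AirspeedVelocity.TableUtils.create_table(combined_results))

    # Alternative: report allocations and memory usage instead of timings.
    print(AirspeedVelocity.TableUtils.create_table(
        combined_results;
        formatter=AirspeedVelocity.TableUtils.format_memory,
    ))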
diff --git a/dev/index.html b/dev/index.html
index ed121eb..fb09047 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,7 +1,7 @@
-Home · AirspeedVelocity.jl

AirspeedVelocity.jl

Stable Dev Build Status Coverage

AirspeedVelocity.jl strives to make it easy to benchmark Julia packages over their lifetime. It is inspired by asv.

This package allows you to:

  • Generate benchmarks directly from the terminal with an easy-to-use CLI.
  • Compare many commits/tags/branches at once.
  • Plot those benchmarks, automatically flattening your benchmark suite into a list of plots with generated titles.
  • Run as a GitHub action to create benchmark comparisons for every submitted PR (in a bot comment).

This package also freezes the benchmark script at a particular revision, so there is no worry about the old history overwriting the benchmark.

Installation

You can install the CLI with:

julia -e 'using Pkg; Pkg.add("AirspeedVelocity"); Pkg.build("AirspeedVelocity")'

This will install two executables at ~/.julia/bin; make sure that directory is on your PATH.

Examples

You may then use the CLI to generate benchmarks for any package with, e.g.,

benchpkg Transducers \
+Home · AirspeedVelocity.jl

AirspeedVelocity.jl

Stable Dev Build Status Coverage

AirspeedVelocity.jl strives to make it easy to benchmark Julia packages over their lifetime. It is inspired by asv.

This package allows you to:

  • Generate benchmarks directly from the terminal with an easy-to-use CLI.
  • Compare many commits/tags/branches at once.
  • Plot those benchmarks, automatically flattening your benchmark suite into a list of plots with generated titles.
  • Run as a GitHub action to create benchmark comparisons for every submitted PR (in a bot comment).

This package also freezes the benchmark script at a particular revision, so there is no worry about the old history overwriting the benchmark.

Installation

You can install the CLI with:

julia -e 'using Pkg; Pkg.add("AirspeedVelocity"); Pkg.build("AirspeedVelocity")'

This will install two executables at ~/.julia/bin; make sure that directory is on your PATH.

Examples

You may use the CLI to generate benchmarks for any package with, e.g.,

benchpkg

This will benchmark the package defined in the current directory at the current dirty state, against the default branch (i.e., main or master), over all benchmarks defined in benchmark/benchmarks.jl. It will then print a markdown table of the results while also saving the JSON results to the current directory.

You can configure all options with the CLI flags. For example, to benchmark the registered package Transducers.jl at the revisions v0.4.20, v0.4.70, and master, you can use:

benchpkg Transducers \
     --rev=v0.4.20,v0.4.70,master \
-    --bench-on=v0.4.20

which will benchmark Transducers.jl, at the revisions v0.4.20, v0.4.70, and master, using the benchmark script benchmark/benchmarks.jl as it was defined at v0.4.20, and then save the JSON results in the current directory.

We can view the results of the benchmark as a table with benchpkgtable:

benchpkgtable Transducers \
+    --bench-on=v0.4.20

This will further use the benchmark script benchmark/benchmarks.jl as it was defined at v0.4.20, and then save the JSON results in the current directory.

We can explicitly view the results of the benchmark as a table with benchpkgtable:

benchpkgtable Transducers \
     --rev=v0.4.20,v0.4.70,master

We can also generate plots of the revisions with:

benchpkgplot Transducers \
     --rev=v0.4.20,v0.4.70,master \
     --format=pdf \
@@ -30,31 +30,35 @@
     --exeflags="--threads=4 -O3"

where we have also specified the output directory and extra flags to pass to the julia executable. We can also now visualize this:

benchpkgplot SymbolicRegression \
     -r v0.15.3,v0.16.2 \
     -i results/ \
-    -o plots/

Using in CI

You can use this package in GitHub actions to benchmark every PR submitted to your package, by copying the example: .github/workflows/benchmark_pr.yml.

Every time a PR is submitted to your package, this workflow will run and generate plots of the performance of the PR against the default branch, as well as a markdown table, showing whether the PR improves or worsens performance:

regression_example

Usage

For running benchmarks, you can use the benchpkg command, which is installed into the ~/.julia/bin folder:

    benchpkg package_name [-r --rev <arg>]
-                          [--url <arg>]
-                          [--path <arg>]
-                          [-o, --output-dir <arg>]
-                          [-e, --exeflags <arg>]
-                          [-a, --add <arg>]
-                          [-s, --script <arg>]
-                          [--bench-on <arg>]
-                          [-f, --filter <arg>]
-                          [--nsamples-load-time <arg>]
-                          [--tune]
+    -o plots/

Using in CI

You can use this package in GitHub actions to benchmark every PR submitted to your package, by copying the example: .github/workflows/benchmark_pr.yml.

Every time a PR is submitted to your package, this workflow will run and generate plots of the performance of the PR against the default branch, as well as a markdown table, showing whether the PR improves or worsens performance:

regression_example

Usage

For running benchmarks, you can use the benchpkg command, which is installed into the ~/.julia/bin folder:

    benchpkg [package_name] [-r --rev <arg>]
+                            [--url <arg>]
+                            [--path <arg>]
+                            [-o, --output-dir <arg>]
+                            [-e, --exeflags <arg>]
+                            [-a, --add <arg>]
+                            [-s, --script <arg>]
+                            [--bench-on <arg>]
+                            [-f, --filter <arg>]
+                            [--nsamples-load-time <arg>]
+                            [--tune]
+                            [--dont-print]
 
 Benchmark a package over a set of revisions.
 
 # Arguments
 
-- `package_name`: Name of the package.
+- `package_name`: Name of the package. If not given, the package is assumed to be
+  the current directory.
 
 # Options
 
 - `-r, --rev <arg>`: Revisions to test (delimit by comma). Use `dirty` to
   benchmark the current state of the package at `path` (and not a git commit).
+  The default is `{DEFAULT},dirty`, which will attempt to find the default branch
+  of the package.
 - `--url <arg>`: URL of the package.
-- `--path <arg>`: Path of the package.
-- `-o, --output-dir <arg>`: Where to save the JSON results.
+- `--path <arg>`: Path of the package. The default is `.` if other arguments are not given.
+- `-o, --output-dir <arg>`: Where to save the JSON results. The default is `.`.
 - `-e, --exeflags <arg>`: CLI flags for Julia (default: none).
 - `-a, --add <arg>`: Extra packages needed (delimit by comma).
 - `-s, --script <arg>`: The benchmark script. Default: `benchmark/benchmarks.jl` downloaded from `stable`.
@@ -63,11 +67,16 @@
 - `-f, --filter <arg>`: Filter the benchmarks to run (delimit by comma).
 - `--nsamples-load-time <arg>`: Number of samples to take when measuring load time of
     the package (default: 5). (This means starting a Julia process for each sample.)
+- `--dont-print`: Don't print the table.
 
 # Flags
 
-- `--tune`: Whether to run benchmarks with tuning (default: false). 

You can also just generate a table:

    benchpkgtable package_name [-r --rev <arg>] [-i --input-dir <arg>]
-                               [--ratio]
+- `--tune`: Whether to run benchmarks with tuning (default: false).

You can also just generate a table:

    benchpkgtable [package_name] [-r --rev <arg>]
+                                 [-i --input-dir <arg>]
+                                 [--ratio]
+                                 [--mode <arg>]
+                                 [--url <arg>]
+                                 [--path <arg>]
 
 Print a table of the benchmarks of a package as created with `benchpkg`.
 
@@ -78,13 +87,23 @@
 # Options
 
 - `-r, --rev <arg>`: Revisions to test (delimit by comma).
+  The default is `{DEFAULT},dirty`, which will attempt to find the default branch
+  of the package.
 - `-i, --input-dir <arg>`: Where the JSON results were saved (default: ".").
+- `--url <arg>`: URL of the package. Only used to get the package name.
+- `--path <arg>`: Path of the package. The default is `.` if other arguments are not given.
+   Only used to get the package name.
 
 # Flags
 
 - `--ratio`: Whether to include the ratio (default: false). Only applies when
-    comparing two revisions.

For plotting, you can use the benchpkgplot function:

    benchpkgplot package_name [-r --rev <arg>] [-i --input-dir <arg>]
-                              [-o --output-dir <arg>] [-n --npart <arg>]
+    comparing two revisions.
+- `--mode`: Table mode(s). Valid values are "time" (default), to print the
+    benchmark time, or "memory", to print the allocation and memory usage.
+    Both options can be passed, if delimited by comma.

For plotting, you can use the benchpkgplot function:

    benchpkgplot package_name [-r --rev <arg>]
+                              [-i --input-dir <arg>]
+                              [-o --output-dir <arg>]
+                              [-n --npart <arg>]
                               [--format <arg>]
 
 Plot the benchmarks of a package as created with `benchpkg`.
@@ -99,4 +118,4 @@
 - `-i, --input-dir <arg>`: Where the JSON results were saved (default: ".").
 - `-o, --output-dir <arg>`: Where to save the plot results (default: ".").
 - `-n, --npart <arg>`: Max number of plots per page (default: 10).
-- `--format <arg>`: File type to save the plots as (default: "png").

If you prefer to use the Julia API, you can use the benchmark function for generating data. The API is given here.

Also be sure to check out PkgBenchmark.jl. PkgBenchmark.jl is a simple wrapper of BenchmarkTools.jl to interface it with Git, and is a good choice for building custom analysis workflows.

However, for me this wrapper is a bit too thin, which is why I created this package. AirspeedVelocity.jl tries to have more features and workflows readily-available. It also emphasizes a CLI (though there is a Julia API), as my subjective view is that this is more suitable for interacting side-by-side with git.

+- `--format <arg>`: File type to save the plots as (default: "png").

If you prefer to use the Julia API, you can use the benchmark function for generating data. The API is given here.
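For instance, a minimal end-to-end sketch of that API, reusing the package, revisions, and directories from the CLI examples above as placeholders:

    using AirspeedVelocity
    using Pkg: PackageSpec

    # Run the benchmarks, then load and tabulate the results.
    AirspeedVelocity.Utils.benchmark("Transducers", ["v0.4.70", "master"]; output_dir="results")
    specs = [PackageSpec(; name="Transducers", rev=r) for r in ["v0.4.70", "master"]]
    results = AirspeedVelocity.Utils.load_results(specs; input_dir="results")
    print(AirspeedVelocity.TableUtils.create_table(results))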

Also be sure to check out PkgBenchmark.jl. PkgBenchmark.jl is a simple wrapper of BenchmarkTools.jl to interface it with Git, and is a good choice for building custom analysis workflows.

However, for me this wrapper is a bit too thin, which is why I created this package. AirspeedVelocity.jl tries to have more features and workflows readily-available. It also emphasizes a CLI (though there is a Julia API), as my subjective view is that this is more suitable for interacting side-by-side with git.

diff --git a/dev/objects.inv b/dev/objects.inv index ae70781003b065ac499d96b7b924842b3a843c5d..18bd52ebd6e4e736191ea9320a2311be211af30f 100644 GIT binary patch delta 14 VcmbQiGJ|D;C$pKJ!A74mi~u0!1aklY delta 14 VcmbQiGJ|D;C$p)Z@kXCBi~u0+1a$xa diff --git a/dev/search_index.js b/dev/search_index.js index 77a08f9..a97e281 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"_index/","page":"-","title":"-","text":"CurrentModule = AirspeedVelocity","category":"page"},{"location":"_index/","page":"-","title":"-","text":"","category":"page"},{"location":"_index/","page":"-","title":"-","text":"Pages = [\"api.md\"]","category":"page"},{"location":"_index/","page":"-","title":"-","text":"Modules = [AirspeedVelocity]","category":"page"},{"location":"api/#API","page":"API","title":"API","text":"","category":"section"},{"location":"api/#Creating-benchmarks","page":"API","title":"Creating benchmarks","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"From the command line:","category":"page"},{"location":"api/","page":"API","title":"API","text":"benchpkg","category":"page"},{"location":"api/#AirspeedVelocity.BenchPkg.benchpkg","page":"API","title":"AirspeedVelocity.BenchPkg.benchpkg","text":"benchpkg package_name [-r --rev ]\n [--url ]\n [--path ]\n [-o, --output-dir ]\n [-e, --exeflags ]\n [-a, --add ]\n [-s, --script ]\n [--bench-on ]\n [-f, --filter ]\n [--nsamples-load-time ]\n [--tune]\n\nBenchmark a package over a set of revisions.\n\nArguments\n\npackage_name: Name of the package.\n\nOptions\n\n-r, --rev : Revisions to test (delimit by comma). Use dirty to benchmark the current state of the package at path (and not a git commit).\n--url : URL of the package.\n--path : Path of the package.\n-o, --output-dir : Where to save the JSON results.\n-e, --exeflags : CLI flags for Julia (default: none).\n-a, --add : Extra packages needed (delimit by comma).\n-s, --script : The benchmark script. Default: benchmark/benchmarks.jl downloaded from stable.\n--bench-on : If the script is not set, this specifies the revision at which to download benchmark/benchmarks.jl from the package.\n-f, --filter : Filter the benchmarks to run (delimit by comma).\n--nsamples-load-time : Number of samples to take when measuring load time of the package (default: 5). (This means starting a Julia process for each sample.)\n\nFlags\n\n--tune: Whether to run benchmarks with tuning (default: false).\n\n\n\n\n\n","category":"function"},{"location":"api/","page":"API","title":"API","text":"Or, directly from Julia:","category":"page"},{"location":"api/","page":"API","title":"API","text":"benchmark(package_name::String, rev::Vector{String}; output_dir::String=\".\", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])","category":"page"},{"location":"api/#AirspeedVelocity.Utils.benchmark-Tuple{String, Vector{String}}","page":"API","title":"AirspeedVelocity.Utils.benchmark","text":"benchmark(package_name::String, rev::Union{String,Vector{String}}; output_dir::String=\".\", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])\n\nRun benchmarks for a given Julia package.\n\nThis function runs the benchmarks specified in the script for the package defined by the package_spec. 
If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.\n\nThe benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.\n\nThe results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.\n\nArguments\n\npackage_name::String: The name of the package for which to run the benchmarks.\nrev::Union{String,Vector{String}}: The revision of the package for which to run the benchmarks. You can also pass a vector of revisions to run benchmarks for multiple versions of a package.\noutput_dir::String=\".\": The directory where the benchmark results JSON file will be saved (default: current directory).\nscript::Union{String,Nothing}=nothing: The path to the benchmark script file. If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.\ntune::Bool=false: Whether to run benchmarks with tuning (default: false).\nexeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).\nextra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.\nurl::Union{String,Nothing}=nothing: URL of the package.\npath::Union{String,Nothing}=nothing: Path to the package.\nbenchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.\nfilter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).\nnsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.\n\n\n\n\n\n","category":"method"},{"location":"api/","page":"API","title":"API","text":"benchmark(package_specs::Vector{PackageSpec}; output_dir::String = \".\", script::Union{String,Nothing} = nothing, tune::Bool = false, exeflags::Cmd = ``, extra_pkgs = String[])","category":"page"},{"location":"api/#AirspeedVelocity.Utils.benchmark-Tuple{Vector{Pkg.Types.PackageSpec}}","page":"API","title":"AirspeedVelocity.Utils.benchmark","text":"benchmark(package_specs::Union{PackageSpec,Vector{PackageSpec}}; output_dir::String=\".\", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])\n\nRun benchmarks for a given Julia package.\n\nThis function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.\n\nThe benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.\n\nThe results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.\n\nArguments\n\npackage::Union{PackageSpec,Vector{PackageSpec}}: The package specification containing information about the package for which to run the benchmarks. You can also pass a vector of package specifications to run benchmarks for multiple versions of a package.\noutput_dir::String=\".\": The directory where the benchmark results JSON file will be saved (default: current directory).\nscript::Union{String,Nothing}=nothing: The path to the benchmark script file. 
If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.\ntune::Bool=false: Whether to run benchmarks with tuning (default: false).\nexeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).\nextra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.\nbenchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.\nfilter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).\nnsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.\n\n\n\n\n\n","category":"method"},{"location":"api/#Loading-and-visualizing-benchmarks","page":"API","title":"Loading and visualizing benchmarks","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"From the command line:","category":"page"},{"location":"api/","page":"API","title":"API","text":"benchpkgtable\nbenchpkgplot","category":"page"},{"location":"api/#AirspeedVelocity.BenchPkgTable.benchpkgtable","page":"API","title":"AirspeedVelocity.BenchPkgTable.benchpkgtable","text":"benchpkgtable package_name [-r --rev ] [-i --input-dir ]\n [--ratio] [--mode ]\n\nPrint a table of the benchmarks of a package as created with benchpkg.\n\nArguments\n\npackage_name: Name of the package.\n\nOptions\n\n-r, --rev : Revisions to test (delimit by comma).\n-i, --input-dir : Where the JSON results were saved (default: \".\").\n\nFlags\n\n--ratio: Whether to include the ratio (default: false). Only applies when comparing two revisions.\n--mode: Table mode(s). Valid values are \"time\" (default), to print the benchmark time, or \"memory\", to print the allocation and memory usage. Both options can be passed, if delimited by comma.\n\n\n\n\n\n","category":"function"},{"location":"api/#AirspeedVelocity.BenchPkgPlot.benchpkgplot","page":"API","title":"AirspeedVelocity.BenchPkgPlot.benchpkgplot","text":"benchpkgplot package_name [-r --rev ] [-i --input-dir ]\n [-o --output-dir ] [-n --npart ]\n [--format ]\n\nPlot the benchmarks of a package as created with benchpkg.\n\nArguments\n\npackage_name: Name of the package.\n\nOptions\n\n-r, --rev : Revisions to test (delimit by comma).\n-i, --input-dir : Where the JSON results were saved (default: \".\").\n-o, --output-dir : Where to save the plots results (default: \".\").\n-n, --npart : Max number of plots per page (default: 10).\n--format : File type to save the plots as (default: \"png\").\n\n\n\n\n\n","category":"function"},{"location":"api/","page":"API","title":"API","text":"load_results(specs::Vector{PackageSpec}; input_dir::String=\".\")","category":"page"},{"location":"api/#AirspeedVelocity.Utils.load_results-Tuple{Vector{Pkg.Types.PackageSpec}}","page":"API","title":"AirspeedVelocity.Utils.load_results","text":"load_results(specs::Vector{PackageSpec}; input_dir::String=\".\")\n\nLoad the results from JSON files for each PackageSpec in the specs vector. The function assumes that the JSON files are located in the input_dir directory and are named as \"results_{s}.json\" where s is equal to PackageName@Rev.\n\nThe function returns a combined OrderedDict, to be input to the combined_plots function.\n\nArguments\n\nspecs::Vector{PackageSpec}: Vector of each package revision to be loaded (as PackageSpec).\ninput_dir::String=\".\": Directory where the results. 
Default is current directory.\n\nReturns\n\nOrderedDict{String,OrderedDict}: Combined results ready to be passed to the combined_plots function.\n\n\n\n\n\n","category":"method"},{"location":"api/","page":"API","title":"API","text":"combined_plots(combined_results::OrderedDict; npart=10)","category":"page"},{"location":"api/#AirspeedVelocity.PlotUtils.combined_plots-Tuple{OrderedDict}","page":"API","title":"AirspeedVelocity.PlotUtils.combined_plots","text":"combined_plots(combined_results::OrderedDict; npart=10)\n\nCreate a combined plot of the results loaded from the load_results function. The function partitions the plots into smaller groups of size npart (defaults to 10) and combines the plots in each group vertically. It returns an array of combined plots.\n\nArguments\n\ncombined_results::OrderedDict: Data to be plotted, obtained from the load_results function.\nnpart::Int=10: Max plots to be combined in a single vertical group. Default is 10.\n\nReturns\n\nArray{Plotly.Plot,1}: An array of combined Plots objects, with each element representing a group of up to npart vertical plots.\n\n\n\n\n\n","category":"method"},{"location":"api/","page":"API","title":"API","text":"create_table(combined_results::OrderedDict; kws...)","category":"page"},{"location":"api/#AirspeedVelocity.TableUtils.create_table-Tuple{OrderedDict}","page":"API","title":"AirspeedVelocity.TableUtils.create_table","text":"create_table(combined_results::OrderedDict; kws...)\n\nCreate a markdown table of the results loaded from the load_results function. If there are two results for a given benchmark, will have an additional column for the comparison, assuming the first revision is one to compare against.\n\nThe formatter keyword argument generates the column value. It defaults to TableUtils.format_time, which prints the median time ± the interquantile range. TableUtils.format_memory is also available to print the number of allocations and the allocated memory.\n\n\n\n\n\n","category":"method"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = AirspeedVelocity","category":"page"},{"location":"#AirspeedVelocity.jl","page":"Home","title":"AirspeedVelocity.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"(Image: Stable) (Image: Dev) (Image: Build Status) (Image: Coverage)","category":"page"},{"location":"","page":"Home","title":"Home","text":"AirspeedVelocity.jl strives to make it easy to benchmark Julia packages over their lifetime. 
It is inspired by asv.","category":"page"},{"location":"","page":"Home","title":"Home","text":"This package allows you to:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Generate benchmarks directly from the terminal with an easy-to-use CLI.\nCompare many commits/tags/branches at once.\nPlot those benchmarks, automatically flattening your benchmark suite into a list of plots with generated titles.\nRun as a GitHub action to create benchmark comparisons for every submitted PR (in a bot comment).","category":"page"},{"location":"","page":"Home","title":"Home","text":"This package also freezes the benchmark script at a particular revision, so there is no worry about the old history overwriting the benchmark.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You can install the CLI with:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia -e 'using Pkg; Pkg.add(\"AirspeedVelocity\"); Pkg.build(\"AirspeedVelocity\")'","category":"page"},{"location":"","page":"Home","title":"Home","text":"This will install two executables at ~/.julia/bin - make sure to have it on your PATH.","category":"page"},{"location":"#Examples","page":"Home","title":"Examples","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You may then use the CLI to generate benchmarks for any package with, e.g.,","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkg Transducers \\\n --rev=v0.4.20,v0.4.70,master \\\n --bench-on=v0.4.20","category":"page"},{"location":"","page":"Home","title":"Home","text":"which will benchmark Transducers.jl, at the revisions v0.4.20, v0.4.70, and master, using the benchmark script benchmark/benchmarks.jl as it was defined at v0.4.20, and then save the JSON results in the current directory.","category":"page"},{"location":"","page":"Home","title":"Home","text":"We can view the results of the benchmark as a table with benchpkgtable:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkgtable Transducers \\\n --rev=v0.4.20,v0.4.70,master","category":"page"},{"location":"","page":"Home","title":"Home","text":"We can also generate plots of the revisions with:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkgplot Transducers \\\n --rev=v0.4.20,v0.4.70,master \\\n --format=pdf \\\n --npart=5","category":"page"},{"location":"","page":"Home","title":"Home","text":"which will generate a pdf file for each set of 5 plots, showing the change with each revision:","category":"page"},{"location":"","page":"Home","title":"Home","text":"\"Screenshot","category":"page"},{"location":"","page":"Home","title":"Home","text":"You can also provide a custom benchmark. 
For example, let's say you have a file script.jl, defining a benchmark for SymbolicRegression.jl (we always need to define the SUITE variable as a BenchmarkGroup):","category":"page"},{"location":"","page":"Home","title":"Home","text":"using BenchmarkTools, SymbolicRegression\nconst SUITE = BenchmarkGroup()\n\n# Create hierarchy of benchmarks:\nSUITE[\"eval_tree_array\"] = BenchmarkGroup()\n\noptions = Options(; binary_operators=[+, -, *], unary_operators=[cos])\ntree = Node(; feature=1) + cos(3.2f0 * Node(; feature=2))\n\n\nfor n in [10, 20]\n SUITE[\"eval_tree_array\"][n] = @benchmarkable(\n eval_tree_array($tree, X, $options),\n evals=10,\n samples=1000,\n setup=(X=randn(Float32, 2, $n))\n )\nend\n","category":"page"},{"location":"","page":"Home","title":"Home","text":"Inside this script, we will also have access to the PACKAGE_VERSION constant, to allow for different behavior depending on tag. We can run this benchmark over the history of SymbolicRegression.jl with:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkg SymbolicRegression \\\n -r v0.15.3,v0.16.2 \\\n -s script.jl \\\n -o results/ \\\n --exeflags=\"--threads=4 -O3\"","category":"page"},{"location":"","page":"Home","title":"Home","text":"where we have also specified the output directory and extra flags to pass to the julia executable. We can also now visualize this:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkgplot SymbolicRegression \\\n -r v0.15.3,v0.16.2 \\\n -i results/ \\\n -o plots/","category":"page"},{"location":"#Using-in-CI","page":"Home","title":"Using in CI","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You can use this package in GitHub actions to benchmark every PR submitted to your package, by copying the example: .github/workflows/benchmark_pr.yml.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Every time a PR is submitted to your package, this workflow will run and generate plots of the performance of the PR against the default branch, as well as a markdown table, showing whether the PR improves or worsens performance:","category":"page"},{"location":"","page":"Home","title":"Home","text":"(Image: regression_example)","category":"page"},{"location":"#Usage","page":"Home","title":"Usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"For running benchmarks, you can use the benchpkg command, which is built into the ~/.julia/bin folder:","category":"page"},{"location":"","page":"Home","title":"Home","text":" benchpkg package_name [-r --rev ]\n [--url ]\n [--path ]\n [-o, --output-dir ]\n [-e, --exeflags ]\n [-a, --add ]\n [-s, --script ]\n [--bench-on ]\n [-f, --filter ]\n [--nsamples-load-time ]\n [--tune]\n\nBenchmark a package over a set of revisions.\n\n# Arguments\n\n- `package_name`: Name of the package.\n\n# Options\n\n- `-r, --rev `: Revisions to test (delimit by comma). Use `dirty` to\n benchmark the current state of the package at `path` (and not a git commit).\n- `--url `: URL of the package.\n- `--path `: Path of the package.\n- `-o, --output-dir `: Where to save the JSON results.\n- `-e, --exeflags `: CLI flags for Julia (default: none).\n- `-a, --add `: Extra packages needed (delimit by comma).\n- `-s, --script `: The benchmark script. 
Default: `benchmark/benchmarks.jl` downloaded from `stable`.\n- `--bench-on `: If the script is not set, this specifies the revision at which\n to download `benchmark/benchmarks.jl` from the package.\n- `-f, --filter `: Filter the benchmarks to run (delimit by comma).\n- `--nsamples-load-time `: Number of samples to take when measuring load time of\n the package (default: 5). (This means starting a Julia process for each sample.)\n\n# Flags\n\n- `--tune`: Whether to run benchmarks with tuning (default: false). ","category":"page"},{"location":"","page":"Home","title":"Home","text":"You can also just generate a table:","category":"page"},{"location":"","page":"Home","title":"Home","text":" benchpkgtable package_name [-r --rev ] [-i --input-dir ]\n [--ratio]\n\nPrint a table of the benchmarks of a package as created with `benchpkg`.\n\n# Arguments\n\n- `package_name`: Name of the package.\n\n# Options\n\n- `-r, --rev `: Revisions to test (delimit by comma).\n- `-i, --input-dir `: Where the JSON results were saved (default: \".\").\n\n# Flags\n\n- `--ratio`: Whether to include the ratio (default: false). Only applies when\n comparing two revisions.","category":"page"},{"location":"","page":"Home","title":"Home","text":"For plotting, you can use the benchpkgplot function:","category":"page"},{"location":"","page":"Home","title":"Home","text":" benchpkgplot package_name [-r --rev ] [-i --input-dir ]\n [-o --output-dir ] [-n --npart ]\n [--format ]\n\nPlot the benchmarks of a package as created with `benchpkg`.\n\n# Arguments\n\n- `package_name`: Name of the package.\n\n# Options\n\n- `-r, --rev `: Revisions to test (delimit by comma).\n- `-i, --input-dir `: Where the JSON results were saved (default: \".\").\n- `-o, --output-dir `: Where to save the plots results (default: \".\").\n- `-n, --npart `: Max number of plots per page (default: 10).\n- `--format `: File type to save the plots as (default: \"png\").","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you prefer to use the Julia API, you can use the benchmark function for generating data. The API is given here.","category":"page"},{"location":"#Related-packages","page":"Home","title":"Related packages","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Also be sure to check out PkgBenchmark.jl. PkgBenchmark.jl is a simple wrapper of BenchmarkTools.jl to interface it with Git, and is a good choice for building custom analysis workflows.","category":"page"},{"location":"","page":"Home","title":"Home","text":"However, for me this wrapper is a bit too thin, which is why I created this package. AirspeedVelocity.jl tries to have more features and workflows readily-available. 
It also emphasizes a CLI (though there is a Julia API), as my subjective view is that this is more suitable for interacting side-by-side with git.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Pages = [\"api.md\"]","category":"page"},{"location":"","page":"Home","title":"Home","text":"Modules = [AirspeedVelocity]","category":"page"}] +[{"location":"_index/","page":"-","title":"-","text":"CurrentModule = AirspeedVelocity","category":"page"},{"location":"_index/","page":"-","title":"-","text":"","category":"page"},{"location":"_index/","page":"-","title":"-","text":"Pages = [\"api.md\"]","category":"page"},{"location":"_index/","page":"-","title":"-","text":"Modules = [AirspeedVelocity]","category":"page"},{"location":"api/#API","page":"API","title":"API","text":"","category":"section"},{"location":"api/#Creating-benchmarks","page":"API","title":"Creating benchmarks","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"From the command line:","category":"page"},{"location":"api/","page":"API","title":"API","text":"benchpkg","category":"page"},{"location":"api/#AirspeedVelocity.BenchPkg.benchpkg","page":"API","title":"AirspeedVelocity.BenchPkg.benchpkg","text":"benchpkg [package_name] [-r --rev ]\n [--url ]\n [--path ]\n [-o, --output-dir ]\n [-e, --exeflags ]\n [-a, --add ]\n [-s, --script ]\n [--bench-on ]\n [-f, --filter ]\n [--nsamples-load-time ]\n [--tune]\n [--dont-print]\n\nBenchmark a package over a set of revisions.\n\nArguments\n\npackage_name: Name of the package. If not given, the package is assumed to be the current directory.\n\nOptions\n\n-r, --rev : Revisions to test (delimit by comma). Use dirty to benchmark the current state of the package at path (and not a git commit). The default is {DEFAULT},dirty, which will attempt to find the default branch of the package.\n--url : URL of the package.\n--path : Path of the package. The default is . if other arguments are not given.\n-o, --output-dir : Where to save the JSON results. The default is ..\n-e, --exeflags : CLI flags for Julia (default: none).\n-a, --add : Extra packages needed (delimit by comma).\n-s, --script : The benchmark script. Default: benchmark/benchmarks.jl downloaded from stable.\n--bench-on : If the script is not set, this specifies the revision at which to download benchmark/benchmarks.jl from the package.\n-f, --filter : Filter the benchmarks to run (delimit by comma).\n--nsamples-load-time : Number of samples to take when measuring load time of the package (default: 5). 
(This means starting a Julia process for each sample.)\n--dont-print: Don't print the table.\n\nFlags\n\n--tune: Whether to run benchmarks with tuning (default: false).\n\n\n\n\n\n","category":"function"},{"location":"api/","page":"API","title":"API","text":"Or, directly from Julia:","category":"page"},{"location":"api/","page":"API","title":"API","text":"benchmark(package_name::String, rev::Vector{String}; output_dir::String=\".\", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])","category":"page"},{"location":"api/#AirspeedVelocity.Utils.benchmark-Tuple{String, Vector{String}}","page":"API","title":"AirspeedVelocity.Utils.benchmark","text":"benchmark(package_name::String, rev::Union{String,Vector{String}}; output_dir::String=\".\", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])\n\nRun benchmarks for a given Julia package.\n\nThis function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.\n\nThe benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.\n\nThe results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.\n\nArguments\n\npackage_name::String: The name of the package for which to run the benchmarks.\nrev::Union{String,Vector{String}}: The revision of the package for which to run the benchmarks. You can also pass a vector of revisions to run benchmarks for multiple versions of a package.\noutput_dir::String=\".\": The directory where the benchmark results JSON file will be saved (default: current directory).\nscript::Union{String,Nothing}=nothing: The path to the benchmark script file. 
If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.\ntune::Bool=false: Whether to run benchmarks with tuning (default: false).\nexeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).\nextra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.\nurl::Union{String,Nothing}=nothing: URL of the package.\npath::Union{String,Nothing}=nothing: Path to the package.\nbenchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.\nfilter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).\nnsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.\n\n\n\n\n\n","category":"method"},{"location":"api/","page":"API","title":"API","text":"benchmark(package_specs::Vector{PackageSpec}; output_dir::String = \".\", script::Union{String,Nothing} = nothing, tune::Bool = false, exeflags::Cmd = ``, extra_pkgs = String[])","category":"page"},{"location":"api/#AirspeedVelocity.Utils.benchmark-Tuple{Vector{Pkg.Types.PackageSpec}}","page":"API","title":"AirspeedVelocity.Utils.benchmark","text":"benchmark(package_specs::Union{PackageSpec,Vector{PackageSpec}}; output_dir::String=\".\", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])\n\nRun benchmarks for a given Julia package.\n\nThis function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.\n\nThe benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.\n\nThe results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.\n\nArguments\n\npackage::Union{PackageSpec,Vector{PackageSpec}}: The package specification containing information about the package for which to run the benchmarks. You can also pass a vector of package specifications to run benchmarks for multiple versions of a package.\noutput_dir::String=\".\": The directory where the benchmark results JSON file will be saved (default: current directory).\nscript::Union{String,Nothing}=nothing: The path to the benchmark script file. 
If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.\ntune::Bool=false: Whether to run benchmarks with tuning (default: false).\nexeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).\nextra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.\nbenchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.\nfilter_benchmarks::Vector{String}=String[]: Filter the benchmarks to run (default: all).\nnsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.\n\n\n\n\n\n","category":"method"},{"location":"api/#Loading-and-visualizing-benchmarks","page":"API","title":"Loading and visualizing benchmarks","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"From the command line:","category":"page"},{"location":"api/","page":"API","title":"API","text":"benchpkgtable\nbenchpkgplot","category":"page"},{"location":"api/#AirspeedVelocity.BenchPkgTable.benchpkgtable","page":"API","title":"AirspeedVelocity.BenchPkgTable.benchpkgtable","text":"benchpkgtable [package_name] [-r --rev <arg>]\n [-i --input-dir <arg>]\n [--ratio]\n [--mode <arg>]\n [--url <arg>]\n [--path <arg>]\n\nPrint a table of the benchmarks of a package as created with benchpkg.\n\nArguments\n\npackage_name: Name of the package.\n\nOptions\n\n-r, --rev <arg>: Revisions to test (delimit by comma). The default is {DEFAULT},dirty, which will attempt to find the default branch of the package.\n-i, --input-dir <arg>: Where the JSON results were saved (default: \".\").\n--url <arg>: URL of the package. Only used to get the package name.\n--path <arg>: Path of the package. The default is . if other arguments are not given. Only used to get the package name.\n\nFlags\n\n--ratio: Whether to include the ratio (default: false). Only applies when comparing two revisions.\n--mode: Table mode(s). Valid values are \"time\" (default), to print the benchmark time, or \"memory\", to print the allocation and memory usage. Both options can be passed, if delimited by comma.\n\n\n\n\n\n","category":"function"},{"location":"api/#AirspeedVelocity.BenchPkgPlot.benchpkgplot","page":"API","title":"AirspeedVelocity.BenchPkgPlot.benchpkgplot","text":"benchpkgplot package_name [-r --rev <arg>] [-i --input-dir <arg>]\n [-o --output-dir <arg>] [-n --npart <arg>]\n [--format <arg>]\n\nPlot the benchmarks of a package as created with benchpkg.\n\nArguments\n\npackage_name: Name of the package.\n\nOptions\n\n-r, --rev <arg>: Revisions to test (delimit by comma).\n-i, --input-dir <arg>: Where the JSON results were saved (default: \".\").\n-o, --output-dir <arg>: Where to save the plot results (default: \".\").\n-n, --npart <arg>: Max number of plots per page (default: 10).\n--format <arg>: File type to save the plots as (default: \"png\").\n\n\n\n\n\n","category":"function"},{"location":"api/","page":"API","title":"API","text":"load_results(specs::Vector{PackageSpec}; input_dir::String=\".\")","category":"page"},{"location":"api/#AirspeedVelocity.Utils.load_results-Tuple{Vector{Pkg.Types.PackageSpec}}","page":"API","title":"AirspeedVelocity.Utils.load_results","text":"load_results(specs::Vector{PackageSpec}; input_dir::String=\".\")\n\nLoad the results from JSON files for each PackageSpec in the specs vector. 
The function assumes that the JSON files are located in the input_dir directory and are named as \"results_{s}.json\" where s is equal to PackageName@Rev.\n\nThe function returns a combined OrderedDict, to be input to the combined_plots function.\n\nArguments\n\nspecs::Vector{PackageSpec}: Vector of each package revision to be loaded (as PackageSpec).\ninput_dir::String=\".\": Directory where the results were saved. Default is current directory.\n\nReturns\n\nOrderedDict{String,OrderedDict}: Combined results ready to be passed to the combined_plots function.\n\n\n\n\n\n","category":"method"},{"location":"api/","page":"API","title":"API","text":"combined_plots(combined_results::OrderedDict; npart=10)","category":"page"},{"location":"api/#AirspeedVelocity.PlotUtils.combined_plots-Tuple{OrderedDict}","page":"API","title":"AirspeedVelocity.PlotUtils.combined_plots","text":"combined_plots(combined_results::OrderedDict; npart=10)\n\nCreate a combined plot of the results loaded from the load_results function. The function partitions the plots into smaller groups of size npart (defaults to 10) and combines the plots in each group vertically. It returns an array of combined plots.\n\nArguments\n\ncombined_results::OrderedDict: Data to be plotted, obtained from the load_results function.\nnpart::Int=10: Max plots to be combined in a single vertical group. Default is 10.\n\nReturns\n\nArray{Plotly.Plot,1}: An array of combined Plots objects, with each element representing a group of up to npart vertical plots.\n\n\n\n\n\n","category":"method"},{"location":"api/","page":"API","title":"API","text":"create_table(combined_results::OrderedDict; kws...)","category":"page"},{"location":"api/#AirspeedVelocity.TableUtils.create_table-Tuple{OrderedDict}","page":"API","title":"AirspeedVelocity.TableUtils.create_table","text":"create_table(combined_results::OrderedDict; kws...)\n\nCreate a markdown table of the results loaded from the load_results function. If there are two results for a given benchmark, the table will have an additional column for the comparison, assuming the first revision is the one to compare against.\n\nThe formatter keyword argument generates the column value. It defaults to TableUtils.format_time, which prints the median time ± the interquartile range. TableUtils.format_memory is also available to print the number of allocations and the allocated memory.\n\n\n\n\n\n","category":"method"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = AirspeedVelocity","category":"page"},{"location":"#AirspeedVelocity.jl","page":"Home","title":"AirspeedVelocity.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"(Image: Stable) (Image: Dev) (Image: Build Status) (Image: Coverage)","category":"page"},{"location":"","page":"Home","title":"Home","text":"AirspeedVelocity.jl strives to make it easy to benchmark Julia packages over their lifetime. 
It is inspired by asv.","category":"page"},{"location":"","page":"Home","title":"Home","text":"This package allows you to:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Generate benchmarks directly from the terminal with an easy-to-use CLI.\nCompare many commits/tags/branches at once.\nPlot those benchmarks, automatically flattening your benchmark suite into a list of plots with generated titles.\nRun as a GitHub action to create benchmark comparisons for every submitted PR (in a bot comment).","category":"page"},{"location":"","page":"Home","title":"Home","text":"This package also freezes the benchmark script at a particular revision, so there is no worry about the old history overwriting the benchmark.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You can install the CLI with:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia -e 'using Pkg; Pkg.add(\"AirspeedVelocity\"); Pkg.build(\"AirspeedVelocity\")'","category":"page"},{"location":"","page":"Home","title":"Home","text":"This will install the executables benchpkg, benchpkgtable, and benchpkgplot at ~/.julia/bin - make sure that directory is on your PATH.","category":"page"},{"location":"#Examples","page":"Home","title":"Examples","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You may use the CLI to generate benchmarks for any package with, e.g.,","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkg","category":"page"},{"location":"","page":"Home","title":"Home","text":"This will benchmark the package defined in the current directory at the current dirty state, against the default branch (i.e., main or master), over all benchmarks defined in benchmark/benchmarks.jl. It will then print a markdown table of the results while also saving the JSON results to the current directory.","category":"page"},{"location":"","page":"Home","title":"Home","text":"You can configure all options with the CLI flags. For example, to benchmark the registered package Transducers.jl at the revisions v0.4.20, v0.4.70, and master, you can use:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkg Transducers \\\n --rev=v0.4.20,v0.4.70,master \\\n --bench-on=v0.4.20","category":"page"},{"location":"","page":"Home","title":"Home","text":"This will further use the benchmark script benchmark/benchmarks.jl as it was defined at v0.4.20, and then save the JSON results in the current directory.","category":"page"},{"location":"","page":"Home","title":"Home","text":"We can explicitly view the results of the benchmark as a table with benchpkgtable:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkgtable Transducers \\\n --rev=v0.4.20,v0.4.70,master","category":"page"},{"location":"","page":"Home","title":"Home","text":"We can also generate plots of the revisions with:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkgplot Transducers \\\n --rev=v0.4.20,v0.4.70,master \\\n --format=pdf \\\n --npart=5","category":"page"},{"location":"","page":"Home","title":"Home","text":"which will generate a pdf file for each set of 5 plots, showing the change with each revision:","category":"page"},{"location":"","page":"Home","title":"Home","text":"(Image: Screenshot)","category":"page"},{"location":"","page":"Home","title":"Home","text":"You can also provide a custom benchmark. 
For example, let's say you have a file script.jl, defining a benchmark for SymbolicRegression.jl (we always need to define the SUITE variable as a BenchmarkGroup):","category":"page"},{"location":"","page":"Home","title":"Home","text":"using BenchmarkTools, SymbolicRegression\nconst SUITE = BenchmarkGroup()\n\n# Create hierarchy of benchmarks:\nSUITE[\"eval_tree_array\"] = BenchmarkGroup()\n\noptions = Options(; binary_operators=[+, -, *], unary_operators=[cos])\ntree = Node(; feature=1) + cos(3.2f0 * Node(; feature=2))\n\n\nfor n in [10, 20]\n SUITE[\"eval_tree_array\"][n] = @benchmarkable(\n eval_tree_array($tree, X, $options),\n evals=10,\n samples=1000,\n setup=(X=randn(Float32, 2, $n))\n )\nend\n","category":"page"},{"location":"","page":"Home","title":"Home","text":"Inside this script, we will also have access to the PACKAGE_VERSION constant, to allow for different behavior depending on tag. We can run this benchmark over the history of SymbolicRegression.jl with:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkg SymbolicRegression \\\n -r v0.15.3,v0.16.2 \\\n -s script.jl \\\n -o results/ \\\n --exeflags=\"--threads=4 -O3\"","category":"page"},{"location":"","page":"Home","title":"Home","text":"where we have also specified the output directory and extra flags to pass to the julia executable. We can also now visualize this:","category":"page"},{"location":"","page":"Home","title":"Home","text":"benchpkgplot SymbolicRegression \\\n -r v0.15.3,v0.16.2 \\\n -i results/ \\\n -o plots/","category":"page"},{"location":"#Using-in-CI","page":"Home","title":"Using in CI","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"You can use this package in GitHub actions to benchmark every PR submitted to your package, by copying the example: .github/workflows/benchmark_pr.yml.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Every time a PR is submitted to your package, this workflow will run and generate plots of the performance of the PR against the default branch, as well as a markdown table, showing whether the PR improves or worsens performance:","category":"page"},{"location":"","page":"Home","title":"Home","text":"(Image: regression_example)","category":"page"},{"location":"#Usage","page":"Home","title":"Usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"For running benchmarks, you can use the benchpkg command, which is built into the ~/.julia/bin folder:","category":"page"},{"location":"","page":"Home","title":"Home","text":" benchpkg [package_name] [-r --rev <arg>]\n [--url <arg>]\n [--path <arg>]\n [-o, --output-dir <arg>]\n [-e, --exeflags <arg>]\n [-a, --add <arg>]\n [-s, --script <arg>]\n [--bench-on <arg>]\n [-f, --filter <arg>]\n [--nsamples-load-time <arg>]\n [--tune]\n [--dont-print]\n\nBenchmark a package over a set of revisions.\n\n# Arguments\n\n- `package_name`: Name of the package. If not given, the package is assumed to be\n the current directory.\n\n# Options\n\n- `-r, --rev <arg>`: Revisions to test (delimit by comma). Use `dirty` to\n benchmark the current state of the package at `path` (and not a git commit).\n The default is `{DEFAULT},dirty`, which will attempt to find the default branch\n of the package.\n- `--url <arg>`: URL of the package.\n- `--path <arg>`: Path of the package. The default is `.` if other arguments are not given.\n- `-o, --output-dir <arg>`: Where to save the JSON results. 
The default is `.`.\n- `-e, --exeflags <arg>`: CLI flags for Julia (default: none).\n- `-a, --add <arg>`: Extra packages needed (delimit by comma).\n- `-s, --script <arg>`: The benchmark script. Default: `benchmark/benchmarks.jl` downloaded from `stable`.\n- `--bench-on <arg>`: If the script is not set, this specifies the revision at which\n to download `benchmark/benchmarks.jl` from the package.\n- `-f, --filter <arg>`: Filter the benchmarks to run (delimit by comma).\n- `--nsamples-load-time <arg>`: Number of samples to take when measuring load time of\n the package (default: 5). (This means starting a Julia process for each sample.)\n- `--dont-print`: Don't print the table.\n\n# Flags\n\n- `--tune`: Whether to run benchmarks with tuning (default: false).","category":"page"},{"location":"","page":"Home","title":"Home","text":"You can also just generate a table:","category":"page"},{"location":"","page":"Home","title":"Home","text":" benchpkgtable [package_name] [-r --rev <arg>]\n [-i --input-dir <arg>]\n [--ratio]\n [--mode <arg>]\n [--url <arg>]\n [--path <arg>]\n\nPrint a table of the benchmarks of a package as created with `benchpkg`.\n\n# Arguments\n\n- `package_name`: Name of the package.\n\n# Options\n\n- `-r, --rev <arg>`: Revisions to test (delimit by comma).\n The default is `{DEFAULT},dirty`, which will attempt to find the default branch\n of the package.\n- `-i, --input-dir <arg>`: Where the JSON results were saved (default: \".\").\n- `--url <arg>`: URL of the package. Only used to get the package name.\n- `--path <arg>`: Path of the package. The default is `.` if other arguments are not given.\n Only used to get the package name.\n\n# Flags\n\n- `--ratio`: Whether to include the ratio (default: false). Only applies when\n comparing two revisions.\n- `--mode`: Table mode(s). Valid values are \"time\" (default), to print the\n benchmark time, or \"memory\", to print the allocation and memory usage.\n Both options can be passed, if delimited by comma.","category":"page"},{"location":"","page":"Home","title":"Home","text":"For plotting, you can use the benchpkgplot function:","category":"page"},{"location":"","page":"Home","title":"Home","text":" benchpkgplot package_name [-r --rev <arg>]\n [-i --input-dir <arg>]\n [-o --output-dir <arg>]\n [-n --npart <arg>]\n [--format <arg>]\n\nPlot the benchmarks of a package as created with `benchpkg`.\n\n# Arguments\n\n- `package_name`: Name of the package.\n\n# Options\n\n- `-r, --rev <arg>`: Revisions to test (delimit by comma).\n- `-i, --input-dir <arg>`: Where the JSON results were saved (default: \".\").\n- `-o, --output-dir <arg>`: Where to save the plot results (default: \".\").\n- `-n, --npart <arg>`: Max number of plots per page (default: 10).\n- `--format <arg>`: File type to save the plots as (default: \"png\").","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you prefer to use the Julia API, you can use the benchmark function for generating data. The API is given here.","category":"page"},{"location":"#Related-packages","page":"Home","title":"Related packages","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Also be sure to check out PkgBenchmark.jl. PkgBenchmark.jl is a simple wrapper of BenchmarkTools.jl to interface it with Git, and is a good choice for building custom analysis workflows.","category":"page"},{"location":"","page":"Home","title":"Home","text":"However, for me this wrapper is a bit too thin, which is why I created this package. AirspeedVelocity.jl tries to have more features and workflows readily-available. 
It also emphasizes a CLI (though there is a Julia API), as my subjective view is that this is more suitable for interacting side-by-side with git.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Pages = [\"api.md\"]","category":"page"},{"location":"","page":"Home","title":"Home","text":"Modules = [AirspeedVelocity]","category":"page"}] }
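The API documentation above describes a Julia-level `benchmark` function alongside the CLI. A minimal, hedged sketch of calling it directly is given below; the package name, revisions, and output directory are illustrative, and `benchmark` is defined in `AirspeedVelocity.Utils`, so qualify it if it is not re-exported in your version.

```julia
# A sketch based on the `benchmark` docstring above; not taken verbatim from the package.
using AirspeedVelocity

# Benchmark two tagged revisions of a registered package. Per the docstring, this
# writes results_Transducers@v0.4.20.json and results_Transducers@v0.4.70.json
# into the output directory (names here are illustrative).
benchmark(
    "Transducers",
    ["v0.4.20", "v0.4.70"];
    output_dir="results",
    exeflags=`--threads=4`,  # extra flags forwarded to each benchmarking Julia process
)
```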
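The loading and visualization functions documented above (`load_results`, `create_table`, `combined_plots`) chain together in the same way. A hedged sketch, assuming the `results_*.json` files from the previous step are in `./results` and that these functions are reachable after `using AirspeedVelocity` (otherwise qualify them with their submodules, e.g. `AirspeedVelocity.Utils.load_results`):

```julia
using AirspeedVelocity
using Pkg: PackageSpec

# One PackageSpec per benchmarked revision; load_results looks for files named
# results_Transducers@<rev>.json inside input_dir (names here are illustrative).
specs = [
    PackageSpec(; name="Transducers", rev="v0.4.20"),
    PackageSpec(; name="Transducers", rev="v0.4.70"),
]
combined = load_results(specs; input_dir="results")

# Markdown comparison table; with two revisions, the first acts as the baseline.
println(create_table(combined))

# Combined figures, grouped vertically in pages of up to 5 benchmarks each.
plots = combined_plots(combined; npart=5)
```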
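The Home page above also notes that benchmark scripts have access to a PACKAGE_VERSION constant "to allow for different behavior depending on tag". A small hedged sketch of what that can look like, assuming PACKAGE_VERSION compares like a VersionNumber; the version cutoff and the workloads are illustrative placeholders, not taken from SymbolicRegression.jl:

```julia
# Inside a benchmark script run by benchpkg (e.g. the script.jl above).
using BenchmarkTools

const SUITE = BenchmarkGroup()

# PACKAGE_VERSION is provided to the script by AirspeedVelocity, per the docs above.
if PACKAGE_VERSION >= v"0.16.0"
    # Only benchmark features that exist on newer tags (illustrative workload).
    SUITE["newer_only"] = @benchmarkable sum(abs2, randn(1_000))
else
    # Fall back to a workload that older tags also support (illustrative).
    SUITE["fallback"] = @benchmarkable sum(x -> x^2, randn(1_000))
end
```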