The C++ counterpart's purpose is to demonstrate the use of CUPTI
metrics, but these are not supported in the Python bindings, so
this example is a duplicate of example/throughput.py.
Implements the `cuda.bench.results.BenchmarkResult` class to represent data from the JSON output of a benchmark execution.
The class provides two class methods: `BenchmarkResult.from_json(filename: str | os.PathLike, *, metadata: Any = None)`, which expects the name of a well-formed JSON file, and `BenchmarkResult.empty(*, metadata: Any = None)`, intended to represent a failed result whose reasons can be recorded in `metadata` at the user's discretion.
`BenchmarkResult` implements the mapping interface, supporting the `.keys()`, `.values()`, and `.items()` methods as well as the `__len__`, `__contains__`, `__getitem__`, and `__iter__` special methods.
Values in `BenchmarkResult` have type `cuda.bench.results.SubBenchmarkResult`, which implements a list-like interface, i.e. the `__len__`, `__getitem__`, and `__iter__` special methods. Values in this list-like structure correspond to measurements of the individual states of a particular benchmark (the key in `BenchmarkResult`).
Elements of the `SubBenchmarkResult` structure have type `SubBenchmarkState`, which supports the mapping protocol with axis values as keys and represents the data corresponding to the measurements for a particular state (a combination of settings for each axis).
The state provides `.samples` and `.frequencies` attributes storing the raw execution durations and estimates of the average GPU frequencies.
Example usage:
```python
import numpy as np
import cuda.bench.results

r = cuda.bench.results.BenchmarkResult.from_json("perf_data/axes_run1.json")
r["copy_sweep_grid_shape"].centers_with_frequencies(
    lambda t, f: np.median(np.asarray(t) * np.asarray(f))
)
```
```
In [1]: import numpy as np, cuda.bench.results
In [2]: r = cuda.bench.results.BenchmarkResult.from_json("temp_data/axes_run1.json")
In [3]: list(r)
Out[3]:
['simple',
'single_float64_axis',
'copy_sweep_grid_shape',
'copy_type_sweep',
'copy_type_conversion_sweep',
'copy_type_and_block_size_sweep']
In [4]: r["simple"].centers(lambda t: np.percentile(t, [25,75]))
Out[4]: {'Device=0': array([0.00100966, 0.00101299])}
In [5]: r.centers(lambda t: np.percentile(t, [25,75]))["simple"]
Out[5]: {'Device=0': array([0.00100966, 0.00101299])}
In [6]: len(r)
Out[6]: 6
In [7]: "fake" in r
Out[7]: False
```
Each `SubBenchmarkState` provides a `.summaries` attribute, a rich object that retains tag/name/hint/hide/description metadata.
* Add nvbench-json-summary to render NVBench JSON output as an NVBench-style
markdown summary table, including axis formatting, device sections, hidden
summary filtering, and summary hint formatting.
Update packaging, type stubs, and tests for the new namespace, renamed
classes, Python 3.10-compatible annotations, and summary-table generation.
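As a rough illustration of the summary-table generation (a minimal sketch only; the actual nvbench-json-summary tool also handles axis formatting, device sections, hidden-summary filtering, and hint formatting, and its column names may differ):

```python
# Hypothetical rows extracted from NVBench JSON; names and values are
# illustrative, not the tool's exact output.
rows = [
    ("Device=0, T=F32", "1.01 ms"),
    ("Device=0, T=F64", "2.03 ms"),
]
header = "| Axis Values | GPU Time |"
separator = "|---|---|"
body = [f"| {axes} | {time} |" for axes, time in rows]
table = "\n".join([header, separator, *body])
print(table)
```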
* Split tests in test_benchmark_result into smaller tests
* Fix breakage due to a file-name change
* Add python/examples/benchmark_result_autotune.py
This example demonstrates using cuda.bench and cuda.bench.results
to implement simple auto-tuning, shown by selecting the
tile-shape hyperparameter for a naive stencil kernel implemented
in numba-cuda.
* Resolve ruff PLE0604
* Fix format_axis_value in the JSON format script to handle None values
Add tests to cover such input.
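One plausible shape for such a None-tolerant formatter (an illustrative sketch; the real format_axis_value may differ):

```python
def format_axis_value(value):
    """Format a single axis value for display, tolerating missing values."""
    if value is None:           # malformed/missing values surface as None
        return "null"
    if isinstance(value, float):
        return f"{value:g}"     # compact float formatting
    return str(value)

print(format_axis_value(None))  # -> null
print(format_axis_value(0.5))   # -> 0.5
print(format_axis_value(128))   # -> 128
```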
* Address code rabbit review feedback
* Fix license header, add validation
* Addressed both issues raised in review
Malformed values are now represented in the result as None.
Skipped benchmarks are no longer dropped, i.e., they are present
in the BenchmarkResult data but are not reflected in the summary
table, in line with what NVBench-instrumented benchmarks do.
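A consumer of the result data might therefore guard against None entries before aggregating, for example (illustrative sketch; the sample values here are made up):

```python
import statistics

# Hypothetical samples list in which one measurement was malformed in the
# source JSON and therefore surfaced as None.
samples = [0.00101, None, 0.00102, 0.00100]
valid = [s for s in samples if s is not None]  # drop malformed entries
center = statistics.median(valid)
print(center)  # median of the three valid samples
```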
* Correct the Python API signature of State.get_axis_values_as_strings
The C++ API has a default boolean argument `color`, but the Python API
declared no arguments.
Closes #345
* Also exercise invocation of get_axis_values_as_strings with a keyword argument value
* Remove use of cuda.core.experimental
Fixed relative text alignment in docstrings to fix autodoc warnings
Renamed the cuda.bench.test_cpp_exception and cuda.bench.test_py_exception functions
to start with an underscore, signaling that these functions are internal and should
not be documented
Account for test_cpp_exceptions -> _test_cpp_exception, same for *_py_*
Make sure to reset __module__ of re-exported symbols to be cuda.bench
* Introduce function colorize to modularize colorization/no-color handling
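A colorize helper of that kind might look like the following (a sketch under assumed names; the repo's actual function may differ):

```python
def colorize(text: str, code: str, *, color: bool = True) -> str:
    """Wrap text in an ANSI escape sequence, or pass it through unchanged."""
    if not color:              # centralizes the no-color handling in one place
        return text
    return f"\033[{code}m{text}\033[0m"

print(colorize("PASS", "32"))               # green when color is enabled
print(colorize("PASS", "32", color=False))  # plain text otherwise
```

Keeping the on/off decision inside one function lets call sites stay identical whether or not color output is wanted.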
* Use sns.set_theme instead of deprecated sns.set()
* Use str.format instead of legacy % syntax
* Simplified iteration over list
Use f-string (supported since Python 3.6) instead of str.format for
better readability and performance
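For reference, the three formatting styles mentioned above are equivalent here (values are illustrative):

```python
name, seconds = "copy_sweep", 0.00101

legacy = "%s: %.5f s" % (name, seconds)            # legacy % syntax
via_format = "{}: {:.5f} s".format(name, seconds)  # str.format
via_fstring = f"{name}: {seconds:.5f} s"           # f-string (Python 3.6+)

assert legacy == via_format == via_fstring
print(via_fstring)
```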
Fix GCC 16 SFINAE-incomplete warnings.
GCC 16 started requiring that the type `T` used in `std::reference_wrapper<T>` be complete when using `-std=c++17`. Since NVBench has to forward-declare some types in header files to break a circular dependency, the use of an incomplete type breaks the build: the `-Wsfinae-incomplete` warning emitted by GCC 16 is promoted to an error by `-Werror`.
This commit replaces the affected uses of `std::reference_wrapper<const nvbench::benchmark_base>` in state.cxx and `std::reference_wrapper<nvbench::printer_base>` in benchmark_base.cxx with raw pointers.
* Add cuda architectures to build wheel for
* Package scripts in wheel
* Separate cuda major version extraction to fix architecture selection logic
* Add back statement printing cuda version
* [pre-commit.ci] auto code formatting
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This loop uses benchmark.run_or_skip to resolve #284 even
for scripts that contain more than one benchmark, or when
a script with a single benchmark is executed while more than
one device is available.
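The control flow can be illustrated with stand-in objects (the real run_or_skip lives on cuda.bench benchmarks; the classes and names below are purely hypothetical):

```python
class FakeBenchmark:
    """Stand-in for a cuda.bench benchmark; the real API may differ."""

    def __init__(self, name: str, supported: bool):
        self.name = name
        self.supported = supported

    def run_or_skip(self) -> bool:
        # Run only when this benchmark applies to the current device;
        # otherwise record a skip instead of aborting the whole script.
        return self.supported


benchmarks = [FakeBenchmark("copy", True), FakeBenchmark("cupti", False)]
ran = {b.name: b.run_or_skip() for b in benchmarks}
print(ran)  # each benchmark either ran or was skipped, never aborted
```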