* Implemented batchnorm-backward Blockwise and Multiblock kernels
* Add batchnorm-backward device op
* Add batchnorm-backward host-reference op
* Add batchnorm-backward example
* Parameter renaming in batchnorm backward kernels and device op
* Change in the example to loosen the threshold for ScaleDiff checking
* Add comments to explain the implementation of batchnorm-backward
* Parameter renaming again in batchnorm backward kernels
* Improve the expression calculation for performance
* Add batchnorm backward to README
* Add comments to explain inv-variance in batchnorm forward and backward
* Renaming the batchnorm forward training and inference examples
* Add/update the comments for batchnorm-backward kernels
* Renaming again
* Add block_sync_lds between two consecutive blockwise reductions
* Move common expression 1/N out of the static_for loops
* Add dy_elementwise_op
* Renaming in backward example again
* Add checking for reduceDims in reference_batchnorm_backward
* Update to comments and codes format
* Rename in the comments
* Remove common expression out of the loop in reference_batchnorm_backward_nhwc_c
* Add block_sync_lds() between blockwise reduction again
* Fix comments again
* Remove int8 from batchnorm-forward instances since it is not needed for forward training and could fail tests
[ROCm/composable_kernel commit: 44789d992a]
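The batchnorm-backward math that the commits above implement (including the inv-variance factor and the 1/N common expression hoisted out of the loops) can be sketched as a plain host reference. This is an illustrative sketch using hypothetical names, not the library's actual `reference_batchnorm_backward` API; it follows the standard batchnorm backward formulas for a single channel.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical host-reference sketch of batchnorm backward for one channel.
// inv_variance = 1 / sqrt(variance + epsilon), as saved by the forward pass.
struct BnBwdResult {
    std::vector<float> dx;
    float dscale;
    float dbias;
};

BnBwdResult batchnorm_backward_ref(const std::vector<float>& x,
                                   const std::vector<float>& dy,
                                   float mean, float inv_variance, float scale)
{
    const std::size_t N = x.size();
    const float reduce_size_inv = 1.0f / static_cast<float>(N); // 1/N hoisted out of the loops

    // First pass: the two reductions (dbias and dscale).
    float dbias = 0.0f, dscale = 0.0f;
    for (std::size_t i = 0; i < N; ++i) {
        const float norm_x = (x[i] - mean) * inv_variance;
        dbias += dy[i];
        dscale += dy[i] * norm_x;
    }

    // Second pass: dx, reusing the reduced values.
    BnBwdResult r{std::vector<float>(N), dscale, dbias};
    const float mean_dy      = dbias * reduce_size_inv;
    const float mean_dy_norm = dscale * reduce_size_inv;
    for (std::size_t i = 0; i < N; ++i) {
        const float norm_x = (x[i] - mean) * inv_variance;
        r.dx[i] = scale * inv_variance * (dy[i] - mean_dy - norm_x * mean_dy_norm);
    }
    return r;
}
```

In the actual GPU kernels the two reductions are done blockwise (with an LDS sync between consecutive reductions, as one commit notes), but the dataflow is the same: reduce first, then compute dx from the reduced values.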
Composable Kernel
Methodology
The Composable Kernel (CK) library aims to provide a programming model for writing performance-critical kernels for machine learning workloads across multiple architectures (GPUs, CPUs, etc.) through general-purpose kernel languages such as HIP C++.
CK utilizes two concepts to achieve performance portability and code maintainability:
- A tile-based programming model
- Algorithm complexity reduction for complex ML operators, using an innovative technique we call "Tensor Coordinate Transformation".
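To give a flavor of the coordinate-transformation idea, here is a toy sketch of a "merge" transform: a kernel iterates over one flat logical dimension while the transform maps each logical index back to a multi-dimensional coordinate and then to a memory offset. This is only an illustration of the concept; CK's real transforms are compile-time template constructs, and the names below are hypothetical.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Toy "merge" coordinate transformation: a logical 1-D index over a merged
// (H, W) dimension is mapped back to the underlying 2-D coordinate, so a
// kernel can loop over one flat dimension while the transform handles the
// multi-dimensional addressing.
struct Merge2D {
    std::size_t H, W;

    // logical merged index -> underlying (h, w) coordinate
    std::array<std::size_t, 2> to_coord(std::size_t idx) const {
        return {idx / W, idx % W};
    }

    // underlying (h, w) coordinate -> offset into a strided row-major buffer
    std::size_t to_offset(std::array<std::size_t, 2> c, std::size_t row_stride) const {
        return c[0] * row_stride + c[1];
    }
};
```

Composing such transforms lets a complex tensor layout be presented to a tile operator as a simple N-dimensional view, which is how CK reduces the algorithmic complexity of operators like convolution to that of GEMM-like tile loops.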
Code Structure
The current CK library is structured into 4 layers:
- "Templated Tile Operators" layer
- "Templated Kernel and Invoker" layer
- "Instantiated Kernel and Invoker" layer
- "Client API" layer
Contributors
The list of developers and contributors is here: Contributors
Citation
If you use CK, please use the following citations:
- CK paper will be freely available on arXiv soon: Realizing Tensor Operators Using Coordinate Transformations and Tile Based Programming
- CITATION.cff
License
CK is released under the MIT license. License File
Build CK
Build docker image
DOCKER_BUILDKIT=1 docker build -t ck:latest -f Dockerfile .
Launch docker
docker run \
-it \
--privileged \
--group-add sudo \
-w /root/workspace \
-v ${PATH_TO_LOCAL_WORKSPACE}:/root/workspace \
ck:latest \
/bin/bash
Build CK
mkdir build && cd build
# Need to specify target ID, example below is for gfx908 and gfx90a
cmake \
-D CMAKE_PREFIX_PATH=/opt/rocm \
-D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
-D CMAKE_CXX_FLAGS="-O3" \
-D CMAKE_BUILD_TYPE=Release \
-D GPU_TARGETS="gfx908;gfx90a" \
..
Build examples and tests
make -j examples tests
make test
Instructions for running each individual example are under example
Build ckProfiler
make -j ckProfiler
Instructions for running ckProfiler are under profiler
Install CK
make install
Using CK as pre-built kernel library
Instructions for using CK as a pre-built kernel library are under client_example
Caveat
Kernel Timing and Verification
CK's own kernel timer will warm up the kernel once, and then run it multiple times to get the average kernel time. For kernels that use atomic add, this causes the output buffer to be accumulated multiple times, leading to verification failure. To work around this, do not use CK's own timer and do verification at the same time. CK's own timer and verification in each example and in ckProfiler can be enabled or disabled from the command line.
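The caveat can be seen with a minimal host-side sketch (illustrative only; the function name is hypothetical): a kernel that accumulates into its output, as an atomic-add-based GPU kernel does, produces a different result when it is launched again for timing without clearing the buffer first.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for a GPU kernel that writes its result with atomicAdd:
// the output is accumulated into, not overwritten.
void accumulate_kernel(std::vector<float>& out, const std::vector<float>& in) {
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] += in[i]; // on the GPU this would be an atomicAdd
}
```

Launching `accumulate_kernel` once on a zeroed buffer gives the correct result; launching it again for a timing repetition doubles the buffer contents, so verification against the reference fails unless timing repetitions are disabled.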

