* Add check for zero values
* Add static assertions
* Remove invalid option '-e' in smoke_test.sh
* Use correct path of smoke_test.sh
* Avoid zero-sized shared memory array
* Add warning comment
* Replace expr by integer_divide_ceil() call
* Use more readable constant names
* Write down assumption as static assertion
* Add more diagnostic error messages
* Fix wrong BlockWarps when using default pipeline policy
* Add more static assertions for A LDS desc
* Allow using vector size < 8 for data type fp16/bf16
* Align vector size between DRAM dist & LDS desc
* Remove no-longer used func decl
* Fix wrong displayed pipeline name
* Undo policy template changes for tile_example_gemm_basic
* Add missing space and make error message stand out
* Unify print precision
* Add missing include directive <iomanip>
* Replace constant 64 by get_warp_size() call
* Replace constant 128 by named variable: BankLength
* Add kAMBlock/kBNBlock attributes
* Allow using different A/B warp dist for multiple blocks
* Add helper function to get warp dist encodings
* Add 4x64x4 fp16 warp gemm attribute impl
* Complete the A/B warp dist encoding logic
* Fix wrong thread mapping for C matrix
* Use smaller vector size for small tile
* Add static assert to block unsupported warp gemm impl
* Extract common code out as helper method
* Add 4x64x16 fp16 warp gemm type alias
* Add comment to warn developers
* Undo WarpGemmAttributeMfma<> changes
* Use clearer static assertion error message
* Add trivial wrapper to get warp dstr encodings
* Only transpose warp gemm result if it's square
* Fix compilation error
* Support multi-block warp gemm (on N direction)
* Remove duplicated code
* Fix output encoding of warp gemm
* Fix wrong shape of WarpGemmAttributeMfmaIterateK<>
* Remove unused code
* Fix wrong shape of WarpGemmAttributeMfmaImplF16F16F32M4N64K4
* Add type config for bf16_t
* Add 4x64x16 bf16 warp gemm
* Update WarpGemmAttributeMfmaIterateKAndTransposedCDistribution
* Add 64x4x4 fp16/bf16 warp gemm impl
* Add 64x4x16 fp16/bf16 warp gemm
* Add static assertion for better error diagnostic
* Get Q dram dstr directly from block gemm
* Add missing header: fused_moe.hpp
* Allow specifying different warp-gemm for gemm0 & gemm1
* Store P matrix into LDS before gemm1
* Fix inconsistent kernel name
* Remove constraint on gemm0 & gemm1 block warps
* Remove unsupported vector size from checking list
* Allow using 4x64x16 warp gemm for gemm0
* Finish policy customization
* Finish pipeline modification
* Use block warps in codegen
* Fix wrong rank of m_lds_window origin
* Use better distributed tensor
* Make P-store earlier
* Remove duplicated expressions
* Remove unnecessary tile window
* Create new files for new splitkv pipeline
* Separate old/new pipeline codegen logic
* Sync changes from develop
* Undo gemm kernel/pipeline changes
* Undo gemm example changes
* Remove blank lines
* Fix typo
* Use new warp gemm interface
* Fix link error
* Fix wrong pipeline tag
* Fix more link error
* Avoid unnecessary padding
* Always use vector load for K
* Pad on fastest dimension when necessary
* Force padding Q on hdim_q
* Set high dimension padding flag to false
* Re-format headers
* Use warps=<1, 4, 1> for both gemm0 & gemm1
* Fix compilation errors
* Remove m/l shuffle logics
* Ignore duplicate data when writing lse_acc
* Use gemm0 block warps as lds tile width
* Remove hard-coded numbers
* Fix wrong distribution width
* Remove unnecessary code
* Add s_barrier before writing to LDS
* Store Q into LDS before gemm0
* Fix wrong Q tile size
* Use simple Q lds descriptor for debugging
* Use more realistic Q lds descriptor
* Add comment & use better variable name
* Make Q lds space not overlap with others
* Remove unnecessary block_tile_reduce_sync() call
* Move Q load statements
* Move block_sync_lds() right before use
* Re-order instructions
* Remove unnecessary lambda expression
* Use 8 threads on kMaxSplits direction while doing reduction
* Tiny correction for using 8 threads on kMaxSplits direction for combine kernel
* Pad num_split direction of o_acc tile window to 4x
* Update splitkv combine pipeline design
* Add kN1 back to splitkv combine pipeline problem
* Fix compilation errors
* Add missing template parameter
* Fix wrong splitkv combine kernel name
* Fix wrong origin
* Fix wrong LDS descriptor shape
* Fix sync & reduction logics
* Remove unnecessary static assertions
* Extract tile size computation logics
* Make sure we can reuse padding flags in combine kernels
* Rename variables
* Use OaccDataType in BlockFmhaSplitKVCombinePipelineTileSizes<>
* Remove unnecessary static assertion
* Fix function name typo
* Add constraint on kN1 template parameter
* Hide K tile loading latency in earlier iteration
* Fix wrong splitkv kernel name
* Use s_shuffling to replace p_shuffling, which removes the need for cross-warp reduction
* Rename pipeline
* Fix wrong pipeline name attribute
* Add GetAlignmentQ() for NWarpSShuffle pipeline
* Separate Q tile into dram tile & register tile concepts
* Remove non-square warp gemm transpose c type alias
* Fallback tile size changes for fmha fwd splitkv
* Remove redundant change
* Refine naming for the S tile
* Use better naming of the S tile dstr (read from lds)
* Share Q lds with K lds
* Tiny change
* Fix by using static_for to pass CI checks
---------
Co-authored-by: Qianfeng Zhang <Qianfeng.Zhang@amd.com>
[ROCm/composable_kernel commit: 37cdbf4f0e]
Composable Kernel
Note
The published documentation is available at Composable Kernel in an organized, easy-to-read format, with search and a table of contents. The documentation source files reside in the
docs folder of this repository. As with all ROCm projects, the documentation is open source. For more information on contributing to the documentation, see Contribute to ROCm documentation.
The Composable Kernel (CK) library provides a programming model for writing performance-critical kernels for machine learning workloads across multiple architectures (GPUs, CPUs, etc.). The CK library uses general purpose kernel languages, such as HIP C++.
CK uses two concepts to achieve performance portability and code maintainability:
- A tile-based programming model
- Algorithm complexity reduction for complex machine learning (ML) operators. This uses an innovative technique called Tensor Coordinate Transformation.
The current CK library is structured into four layers:
- Templated Tile Operators
- Templated Kernel and Invoker
- Instantiated Kernel and Invoker
- Client API
General information
- CK supported operations
- CK Tile supported operations
- CK wrapper
- CK codegen
- CK profiler
- Examples (Custom use of CK supported operations)
- Client examples (Use of CK supported operations with instance factory)
- Terminology
- Contributors
CK is released under the MIT license.
Building CK
We recommend building CK inside Docker containers, which include all necessary packages. Pre-built Docker images are available on DockerHub.
- To build a new Docker image, use the Dockerfile provided with the source code:

  DOCKER_BUILDKIT=1 docker build -t ck:latest -f Dockerfile .

- Launch the Docker container:

  docker run \
    -it \
    --privileged \
    --group-add sudo \
    -w /root/workspace \
    -v ${PATH_TO_LOCAL_WORKSPACE}:/root/workspace \
    ck:latest \
    /bin/bash

- Clone CK source code from the GitHub repository and start the build:

  git clone https://github.com/ROCm/composable_kernel.git && \
  cd composable_kernel && \
  mkdir build && \
  cd build

  You must set the GPU_TARGETS macro to specify the GPU target architecture(s) you want to run CK on. You can specify single or multiple architectures; if you specify multiple architectures, use a semicolon between each, for example gfx908;gfx90a;gfx940.

  cmake \
    -D CMAKE_PREFIX_PATH=/opt/rocm \
    -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
    -D CMAKE_BUILD_TYPE=Release \
    -D GPU_TARGETS="gfx908;gfx90a" \
    ..

  If you don't set GPU_TARGETS on the cmake command line, CK is built for all GPU targets supported by the current compiler (this may take a long time). Tests and examples are only built if GPU_TARGETS is set by the user on the cmake command line.

  NOTE: If you set GPU_TARGETS to a list of architectures, the build will only work if the architectures are similar, e.g., gfx908;gfx90a, or gfx1100;gfx1101;gfx1102. Otherwise, if you want to build the library for a list of different architectures, use the GPU_ARCHS build argument, for example GPU_ARCHS="gfx908;gfx1030;gfx1100;gfx942" (see the sketch after this list).

- Build the entire CK library:

  make -j

- Install CK:

  make -j install
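As referenced in the note above, a configure line using GPU_ARCHS for a mixed architecture list might look like the following sketch (flag spelling follows the note; adjust paths and architectures to your setup):

  cmake \
    -D CMAKE_PREFIX_PATH=/opt/rocm \
    -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
    -D CMAKE_BUILD_TYPE=Release \
    -D GPU_ARCHS="gfx908;gfx1030;gfx1100;gfx942" \
    ..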
Optional post-install steps
- Build examples and tests:

  make -j examples tests

- Build and run all examples and tests:

  make -j check

  You can find instructions for running each individual example in example.

- Build ckProfiler:

  make -j ckProfiler

  You can find instructions for running ckProfiler in profiler.

- Build our documentation locally:

  cd docs
  pip3 install -r sphinx/requirements.txt
  python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
Note the -j option for building with multiple threads in parallel, which speeds up the build significantly.
However, -j without an argument launches an unlimited number of threads, which can cause the build to run
out of memory and crash. On average, you should expect each thread to use ~2 GB of RAM.
Depending on the number of CPU cores and the amount of RAM on your system, you may want to
limit the number of threads. For example, if you have a 128-core CPU and 128 GB of RAM, it's advisable to use -j32.
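If you'd rather compute a safe thread count than guess, here is a minimal shell sketch based on the ~2 GB-per-thread estimate above (assumes nproc and free are available, as on typical Linux systems):

  # Use the smaller of: CPU core count, or total RAM in GB divided by 2.
  jobs=$(nproc)
  mem_jobs=$(( $(free -g | awk '/^Mem:/{print $2}') / 2 ))
  [ "$mem_jobs" -lt "$jobs" ] && jobs=$mem_jobs
  make -j"$jobs"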
Additional cmake flags can be used to significantly speed up the build:

- DTYPES (default is not set) can be set to any subset of "fp64;fp32;fp16;fp8;bf16;int8" to build instances of select data types only. The main default data types are fp32 and fp16; you can safely skip other data types.

- DL_KERNELS (default is OFF) must be set to ON in order to build instances such as gemm_dl or batched_gemm_multi_d_dl. These instances are useful on architectures like NAVI2x, as most other platforms have faster instances, such as xdl or wmma, available.

- CK_USE_FP8_ON_UNSUPPORTED_ARCH (default is OFF) must be set to ON in order to build instances such as gemm_universal, gemm_universal_streamk and gemm_multiply_multiply for the fp8 data type on GPU targets which do not have native support for the fp8 data type, such as gfx908 or gfx90a. These instances are useful on architectures like MI100/MI200 for functional support only.
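For example, a configure line that keeps only the default data types and enables the DL kernels for a NAVI2x card might look like this sketch (flags as documented above; the gfx1030 target is just an illustration):

  cmake \
    -D CMAKE_PREFIX_PATH=/opt/rocm \
    -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
    -D CMAKE_BUILD_TYPE=Release \
    -D GPU_TARGETS="gfx1030" \
    -D DTYPES="fp32;fp16" \
    -D DL_KERNELS=ON \
    ..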
Using sccache for building
The default CK Docker images come with a pre-installed version of sccache, which supports clang being used as the HIP compiler ("-x hip"). Using sccache can help reduce the time to re-build code from hours to 1-2 minutes. In order to invoke sccache, you need to run:
sccache --start-server
then add the following flags to the cmake command line:
-DCMAKE_CXX_COMPILER_LAUNCHER=sccache -DCMAKE_C_COMPILER_LAUNCHER=sccache
You may need to clean up the build folder and repeat the cmake and make steps in order to take advantage of sccache during subsequent builds.
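Putting the pieces together, a typical sccache-enabled rebuild might look like the following sketch (a fresh configure in a cleaned build folder, per the note above; adjust the target and thread count to your system):

  # Start the sccache server once per session.
  sccache --start-server

  # Clean cmake's cache so the launcher settings take effect.
  cd build
  rm -rf CMakeCache.txt CMakeFiles/

  cmake \
    -D CMAKE_PREFIX_PATH=/opt/rocm \
    -D CMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
    -D CMAKE_CXX_COMPILER_LAUNCHER=sccache \
    -D CMAKE_C_COMPILER_LAUNCHER=sccache \
    -D CMAKE_BUILD_TYPE=Release \
    -D GPU_TARGETS="gfx90a" \
    ..
  make -j32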
Using CK as pre-built kernel library
You can find instructions for using CK as a pre-built kernel library in client_example.
Contributing to CK
When you contribute to CK, make sure you run clang-format on all changed files. We highly
recommend using git hooks that are managed by the pre-commit framework. To install hooks, run:
sudo script/install_precommit.sh
With this approach, pre-commit adds the appropriate hooks to your local repository and
automatically runs clang-format (and possibly additional checks) before any commit is created.
If you need to uninstall hooks from the repository, you can do so by running the following command:
script/uninstall_precommit.sh
If you need to temporarily disable pre-commit hooks, you can add the --no-verify option to the
git commit command.
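If hooks are disabled (for example, after committing with --no-verify), you can still run clang-format by hand on the files you changed. A sketch using git to list staged C++ sources (assumes clang-format is on your PATH and the repository provides a .clang-format file, which the formatting requirement implies):

  # Format all staged C++ sources in place with the repository style.
  git diff --cached --name-only -- '*.cpp' '*.hpp' | xargs -r clang-format -i -style=file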

