GEMM with Bias, Elementwise, and Permute Fusion
Theory
This example demonstrates GEMM fused with bias addition, an elementwise operation, and a permutation. This pattern is common in transformer models and other neural architectures where a linear transformation is followed by bias addition, an activation, and a layout transformation.
Mathematical Formulation:
- GEMM: $Y = A \times B$
- Bias: $Z = Y + \text{bias}$
- Elementwise: $E = f(Z)$ (e.g., an activation function)
- Permute: $O = \text{permute}(E, \text{axes})$
Algorithmic Background:
- The GEMM result is kept in registers, bias and elementwise ops are fused in the epilogue, and permutation is applied before writing to global memory.
- Permutation changes the layout/order of tensor axes (e.g., NCHW to NHWC).
- This fusion reduces memory traffic and is common in transformer and CNN pipelines.
How to Run
Prerequisites
Please follow the instructions in the main Build Guide section as a prerequisite to building and running this example.
Build and run
cd composable_kernel/example/25_gemm_bias_e_permute
mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc ..
make -j
# Example run
./gemm_bias_e_permute_xdl --verify=1 --time=1
Source Code Structure
Directory Layout
example/25_gemm_bias_e_permute/
└── gemm_bias_e_permute_xdl.cpp           # Main example: sets up, runs, and verifies GEMM+Bias+Elementwise+Permute
include/ck/tensor_operation/gpu/device/
└── device_gemm_bias_e_permute.hpp        # Device-level API for the fused GEMM
include/ck/tensor_operation/gpu/device/impl/
└── device_gemm_bias_e_permute_impl.hpp   # Implementation
include/ck/tensor_operation/gpu/grid/
└── gridwise_gemm_bias_e_permute.hpp      # Grid-level kernel
Key Classes and Functions
- DeviceGemmBiasEPermute (in device_gemm_bias_e_permute.hpp): Device API for GEMM fused with bias, elementwise operation, and permutation.
- gridwise_gemm_bias_e_permute (in gridwise_gemm_bias_e_permute.hpp): Implements the tiled/blocking GEMM kernel with fused epilogue and permutation.
This example demonstrates how Composable Kernel supports efficient fusion of linear, bias, activation, and layout operations for deep learning models.