[CK] [CK_Tile] Add GroupConv to Kernel Dispatcher

## Motivation

This PR adds CK Tile group convolution (forward, backward-data, backward-weight) support to the kernel dispatcher, matching and unifying with the existing dispatcher GEMM infrastructure in architecture and usability. The dispatcher provides a unified kernel dispatch system with both C++ and Python frontends and, until now, supported only GEMM operations. This PR lets framework integrators use the same declarative kernel workflow for convolutions as for GEMM: declare kernels, build a registry JIT, select kernels within the registry at runtime, and dispatch to the GPU. Future PRs will add runtime kernel-selection heuristics for autotuning kernel parameters based on (problem, hardware arch).

## Technical Details

Grouped convolution support has been added to the CK Tile Dispatcher: generated_conv_backend.hpp enables dispatcher.run(in, wei, out, problem) for all 6 conv variants (fwd/bwd-data/bwd-weight x 2D/3D), runtime heuristic kernel selection, and GroupedConvKernelKey with the full set of ConvConfigBase fields. The Python side adds parallel JIT via registry.build(max_workers) and heuristic registry.select(). The PR includes 7 C++ and 6 Python examples covering all directions with CPU reference validation, plus shared infrastructure improvements (BaseRegistry CRTP, structured exceptions).

As a sanity check, JIT compile time for a single kernel remains the same, and multiple kernels benefit from parallelism:

| Kernels | 1 worker | 8 workers |
|---|---|---|
| 1 | 7.7 s | 7.7 s |
| 2 | 15.9 s | 8.2 s |
| 4 | 33.4 s | 9.7 s |
| 6 | 52.3 s | 10.2 s |

## Test Plan

145 ephemeral unit tests have been added to cover basic functionality. All 30 examples/integration tests run end-to-end on gfx950 (MI350): 7 C++ conv, 7 C++ GEMM, 6 Python conv, 10 Python GEMM. CPU reference validation for forward, backward-data, and backward-weight (2D) passes in both the C++ and Python examples.

## Test Result

All 30 examples pass. Peak performance: 132 TFLOPS (batch-32 forward 56x56), 53 TFLOPS (pointwise 1x1). CPU reference accuracy: max_abs_diff < 0.002 for all directions (fp16 vs fp32 reference).

## Submission Checklist

- [x] Look over the contributing guidelines at https://github.com/ROCm/ROCm/blob/develop/CONTRIBUTING.md#pull-requests.
CK Tile Unified Code Generators
Single source of truth for GEMM and Grouped Convolution kernel generation.
See also: Main Dispatcher README for installation and core concepts.
Shared Infrastructure
Both GEMM and Grouped Conv generators share common code via codegen_common.py:
- TileConfig - Dataclass for tile dimensions
- TraitConfigBase - Base for kernel trait configurations with arch-aware validation
- CommonTypeMappings - Dtype-to-C++ type mappings
- parallel_generate() - Parallel kernel generation with per-kernel progress logging
- Arch-aware expansion helpers (valid_wave_configs, valid_warp_configs, etc.)
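For orientation, the sketch below shows how these pieces might compose in a generator script. It is a hedged illustration only: the TileConfig field names and the parallel_generate() signature are assumptions, not the verbatim codegen_common.py API.
# Hedged sketch: field names and call signatures are assumed, not verbatim codegen_common.py API.
from codegen_common import TileConfig, parallel_generate
def emit_kernel(tile):
    # Render one kernel header name for a given tile configuration (illustrative only).
    return f"gemm_fp16_rcr_{tile.tile_m}x{tile.tile_n}x{tile.tile_k}.hpp"
tiles = [TileConfig(tile_m=m, tile_n=n, tile_k=k)
         for (m, n, k) in [(128, 128, 32), (256, 256, 64)]]
# parallel_generate() fans the per-tile work out across workers with per-kernel progress logging.
headers = parallel_generate(emit_kernel, tiles)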
Quick Start
GEMM
cd dispatcher/codegen
# Generate standard FP16 kernels
python3 unified_gemm_codegen.py \
--output-dir ../build/generated_kernels \
--datatype fp16 \
--layout rcr \
--variants standard
# Generate all variants
python3 unified_gemm_codegen.py \
--output-dir ../build/generated_kernels \
--variants standard preshuffle multi_d
Grouped Convolution
cd dispatcher/codegen
# Generate forward FP16 grouped conv kernels
python3 unified_grouped_conv_codegen.py \
--output-dir ../build/generated_kernels \
--datatype fp16 \
--variant forward \
--ndim-spatial 2
# Generate backward data kernels
python3 unified_grouped_conv_codegen.py \
--output-dir ../build/generated_kernels \
--variant backward_data \
--ndim-spatial 2
Using from Python
from ctypes_utils import CodegenRunner, KernelConfig
# Generate from specific config
config = KernelConfig(tile_m=256, tile_n=256, tile_k=64)
codegen = CodegenRunner()
result = codegen.generate_from_config(config)
# Generate variant
result = codegen.generate("preshuffle")
# Generate all
results = codegen.generate_all()
Command Line Options
| Option | Values | Description |
|---|---|---|
| --output-dir | path | Output directory |
| --datatype | fp16, bf16, fp32, int8 | Data type |
| --layout | rcr, rrr, crr, ccr | Matrix layouts |
| --gpu-target | gfx942, gfx90a, gfx950 | Target GPU |
| --variants | standard, preshuffle, multi_d | Kernel variants |
| --preselected | fp16_rcr_essential, etc. | Predefined kernel set |
Layout Notation
- R = row-major, C = column-major
- Order: A, B, C (e.g., rcr = A row-major, B column-major, C row-major)
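Purely as an illustration of the notation (this mapping is not part of the generator API):
# Illustration only: decode a layout string into per-matrix layouts.
meaning = {"r": "row-major", "c": "column-major"}
layout = "rcr"
a, b, c = (meaning[ch] for ch in layout)
print(f"A: {a}, B: {b}, C: {c}")  # A: row-major, B: column-major, C: row-major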
Variants
Standard
Basic GEMM: C = A x B
PreShuffle
Optimized weight access with LDS pre-shuffling. Best for large matrices.
Multi-D
Element-wise fusion: C = op(A x B + D0 + D1 + ...)
Supported ops: PassThrough, MultiDAdd, Relu, Gelu, Sigmoid, Tanh
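As a reference for what the fused computation produces, here is the Multi-D formula written out in plain numpy for the Relu case with a single D tensor (illustrative only; the generated kernels perform this on the GPU):
import numpy as np
# Illustrative shapes; any conforming M x K, K x N, M x N tensors work.
A = np.random.rand(64, 32).astype(np.float32)
B = np.random.rand(32, 48).astype(np.float32)
D0 = np.random.rand(64, 48).astype(np.float32)
# Multi-D with Relu: C = relu(A x B + D0)
C = np.maximum(A @ B + D0, 0.0)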
Output Structure
generated_kernels/
|---- gemm_fp16_rcr_compv4_..._128x128x32_....hpp # GEMM kernels
|---- gemm_fp16_rcr_compv4_..._preshuffle.hpp
|---- gemm_fp16_rcr_compv4_..._multid_Relu_d1.hpp
|---- grouped_conv_fwd_fp16_nhwgc_..._128x128x32_....hpp # Grouped conv kernels
+---- ...
Configuration Files
arch_specs.json
GPU architecture specifications (single source of truth):
{
"architectures": {
"gfx942": {
"family": "cdna3",
"warp_size": 64,
"warp_configs": [[2, 2, 1], [4, 4, 1]],
...
}
}
}
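A minimal sketch of consuming this file, assuming the structure shown above (the generators' actual loading code may differ):
import json
# Load the architecture table; the key layout follows the excerpt above.
with open("arch_specs.json") as f:
    specs = json.load(f)["architectures"]
gfx942 = specs["gfx942"]
# Arch-aware validation: only warp configurations listed for the target are legal.
assert [2, 2, 1] in gfx942["warp_configs"]
print(gfx942["family"], gfx942["warp_size"])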
preselected_kernels.py
Curated kernel sets for common use cases.
Adding New GPU Support
See ADDING_NEW_GPU.md for the complete guide.
Quick steps:
- Edit arch_specs.json
- Run python generate_arch_specs.py
- Rebuild
Troubleshooting
| Issue | Solution |
|---|---|
| "Arguments not supported" | Check tile config validity |
| Missing element-wise op | Check elementwise_ops.hpp |
| Compilation errors | Verify C++17, include paths |
More info: See ../README.md for full documentation.