# Adding New GPU Architecture Support
Guide for adding support for a new AMD GPU architecture to the CK Tile Dispatcher.
See also: Main Dispatcher README | Codegen README
## Overview
The dispatcher uses `arch_specs.json` as the single source of truth for GPU specifications:

```
arch_specs.json -> generate_arch_specs.py -> arch_specs_generated.py  (Python)
                                          -> arch_specs_generated.hpp (C++)
```
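For orientation, the JSON can be inspected directly. A minimal sketch, assuming `arch_specs.json` is in the current directory:

```python
import json

# Load the single source of truth and list the known architectures.
with open("arch_specs.json") as f:
    specs = json.load(f)

for name, arch in specs["architectures"].items():
    print(f"{name}: family={arch['family']}, warp_size={arch['warp_size']}")
```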
## Quick Start
```bash
# 1. Edit arch_specs.json

# 2. Run generator
python generate_arch_specs.py

# 3. Rebuild
cd ../build && cmake --build . -j8

# 4. Test
ctest
```
## Step-by-Step Guide
### Step 1: Edit arch_specs.json
Add the new architecture under `"architectures"`:
```json
{
  "architectures": {
    "gfx1100": {
      "family": "rdna3",
      "description": "AMD Radeon RX 7000 series (RDNA3)",
      "warp_size": 32,
      "lds_capacity_kb": 64,
      "warp_configs": [
        [2, 4, 1],
        [4, 2, 1]
      ],
      "warp_tile_combos": {
        "fp16_fp16_fp16": [[16, 16, 16], [32, 32, 16]],
        "bf16_bf16_bf16": [[16, 16, 16], [32, 32, 16]]
      }
    }
  }
}
```
### Step 2: Configuration Fields
| Field | Description | Example |
|---|---|---|
| `family` | GPU family | `"cdna3"`, `"rdna4"` |
| `description` | Human-readable name | `"AMD Instinct MI300"` |
| `warp_size` | Wave/warp size | `64` (CDNA), `32` (RDNA) |
| `lds_capacity_kb` | LDS memory in KB | `64` |
| `warp_configs` | Valid `[warp_m, warp_n, warp_k]` combinations | `[[2,2,1], [4,4,1]]` |
| `warp_tile_combos` | Warp tiles per dtype | See below |
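Before running the generator, a quick sanity check of a new entry can catch typos early. A sketch, using the `gfx1100` entry from Step 1; the specific rules (warp size of 32 or 64, three-element warp configs) are assumptions drawn from the table above:

```python
import json

# Every field from the table above must be present.
REQUIRED_FIELDS = {"family", "description", "warp_size", "lds_capacity_kb",
                   "warp_configs", "warp_tile_combos"}

with open("arch_specs.json") as f:
    arch = json.load(f)["architectures"]["gfx1100"]

missing = REQUIRED_FIELDS - arch.keys()
assert not missing, f"missing fields: {missing}"
# Wave size is 64 on CDNA and 32 on RDNA.
assert arch["warp_size"] in (32, 64)
# Each warp config is a [warp_m, warp_n, warp_k] triple.
assert all(len(cfg) == 3 for cfg in arch["warp_configs"])
```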
### Step 3: Warp Tile Combinations
Map data type combinations to valid warp tile sizes:
"warp_tile_combos": {
"fp16_fp16_fp16": [[32, 32, 8], [16, 16, 16], [32, 32, 16]],
"bf16_bf16_bf16": [[32, 32, 8], [16, 16, 16]],
"fp8_fp8_fp16": [[32, 32, 16], [32, 32, 32]],
"int8_int8_int32": [[16, 16, 32], [32, 32, 16]]
}
Key format: `{A_dtype}_{B_dtype}_{C_dtype}`
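For example, the key for an fp16 GEMM with an fp16 output is built and looked up as follows (a tiny illustrative helper, not a dispatcher API):

```python
def dtype_key(a_dtype: str, b_dtype: str, c_dtype: str) -> str:
    """Build the {A_dtype}_{B_dtype}_{C_dtype} lookup key."""
    return f"{a_dtype}_{b_dtype}_{c_dtype}"

combos = {"fp16_fp16_fp16": [[32, 32, 8], [16, 16, 16], [32, 32, 16]]}
key = dtype_key("fp16", "fp16", "fp16")
print([16, 16, 16] in combos[key])  # True
```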
### Step 4: Run Generator
```bash
cd dispatcher/codegen
python generate_arch_specs.py
```
This generates:
- `arch_specs_generated.py` (Python module)
- `../include/ck_tile/dispatcher/arch_specs_generated.hpp` (C++ header)
### Step 5: Rebuild and Test
```bash
cd ../build
cmake --build . -j8
ctest --output-on-failure
```
### Step 6: Verify
```python
from arch_filter import ArchFilter

# Confirm the new architecture is recognized and accepts a sample kernel.
arch_filter = ArchFilter("gfx1100")
is_valid = arch_filter.is_kernel_valid(
    datatype_a="fp16", datatype_b="fp16", datatype_c="fp16",
    tile_m=128, tile_n=128, tile_k=32,
    warp_m=2, warp_n=2, warp_k=1,
    warp_tile_m=16, warp_tile_n=16, warp_tile_k=16
)
print(f"Valid: {is_valid}")
```
## Reference
### Supported Data Types
| Key | Description |
|---|---|
| `fp16` | Half precision (16-bit) |
| `bf16` | Brain float 16 |
| `fp32` | Single precision (32-bit) |
| `fp64` | Double precision (64-bit) |
| `fp8` | 8-bit float (E4M3) |
| `bf8` | 8-bit brain float (E5M2) |
| `int8` | 8-bit integer |
| `int4` | 4-bit integer |
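When estimating LDS usage for a tile, the element sizes implied by the table are as follows (an illustrative mapping, not a dispatcher symbol; `int4` packs two values per byte):

```python
# Bytes per element for each dtype key above.
DTYPE_BYTES = {
    "fp16": 2, "bf16": 2, "fp32": 4, "fp64": 8,
    "fp8": 1, "bf8": 1, "int8": 1, "int4": 0.5,
}
```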
### GPU Families
| Family | Description |
|---|---|
| `cdna2` | MI200 series (gfx90a) |
| `cdna3` | MI300 series (gfx942) |
| `cdna4` | MI350 series (gfx950) |
| `rdna3` | RX 7000 series (gfx1100) |
| `rdna4` | RX 9000 series (gfx1201) |
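For scripting, the table can be inverted into a gfx-target-to-family lookup (names taken from the table; the dict itself is illustrative):

```python
GFX_TO_FAMILY = {
    "gfx90a": "cdna2",   # MI200
    "gfx942": "cdna3",   # MI300
    "gfx950": "cdna4",   # MI350
    "gfx1100": "rdna3",  # RX 7000
    "gfx1201": "rdna4",  # RX 9000
}
```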
### Pipeline LDS Limits
| Pipeline | LDS Limit |
|---|---|
| `compv4` | 32 KB |
| `preshufflev2` | 32 KB |
| default | 64 KB |
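Some pipelines therefore cap usable LDS below the architecture's `lds_capacity_kb`. A hedged sketch of the lookup (the function and dict names are illustrative, not dispatcher symbols):

```python
# Limits from the table above; pipelines not listed fall back to 64 KB.
PIPELINE_LDS_LIMIT_KB = {"compv4": 32, "preshufflev2": 32}

def lds_limit_kb(pipeline: str) -> int:
    return PIPELINE_LDS_LIMIT_KB.get(pipeline, 64)

assert lds_limit_kb("compv4") == 32
assert lds_limit_kb("default") == 64
```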
## Troubleshooting
"Unknown GPU architecture"
- Check that the architecture key matches exactly (e.g., `"gfx942"`, not `"GFX942"`); a quick check is sketched below
- Verify you ran `generate_arch_specs.py`
- Rebuild the C++ code
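The key check from the first item, as a short sketch (the file path is assumed):

```python
import json

archs = json.load(open("arch_specs.json"))["architectures"]
print("gfx942" in archs)   # True if the entry exists
print("GFX942" in archs)   # False: keys are case-sensitive
```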
### Kernels being rejected
```python
from arch_filter import ArchFilter, KernelConfig

arch_filter = ArchFilter("gfx942")
config = KernelConfig(...)  # populate with the kernel's dtype/tile/warp parameters
result = arch_filter.validate_kernel(config)
print(f"Valid: {result.valid}")
for error in result.errors:
    print(f"  Error: {error}")
```
### Missing warp tile combination
- Check `warp_tile_combos` in `arch_specs.json` (a membership test is sketched below)
- Ensure `[warp_tile_m, warp_tile_n, warp_tile_k]` is in the list
- Verify the data type key format (`{A_dtype}_{B_dtype}_{C_dtype}`)
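All three checks collapse into one membership test (a sketch; the architecture and dtype key are examples):

```python
import json

arch = json.load(open("arch_specs.json"))["architectures"]["gfx1100"]
combos = arch["warp_tile_combos"]

# The dtype key must exist and the warp tile triple must be listed under it.
print([16, 16, 16] in combos.get("fp16_fp16_fp16", []))
```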
## File Structure
```
codegen/
|---- arch_specs.json          # Single source of truth (EDIT THIS)
|---- generate_arch_specs.py   # Generator script
|---- arch_specs_generated.py  # Generated Python module
+---- ADDING_NEW_GPU.md        # This file

include/ck_tile/dispatcher/
|---- arch_specs_generated.hpp # Generated C++ header
+---- arch_filter.hpp          # C++ filter
```
## Best Practices
- **Test thoroughly** - Run all tests after adding a new GPU
- **Start minimal** - Add only validated configurations
- **Document sources** - Note where warp tile combinations came from
- **Keep in sync** - If using `tile_engine`, keep both configuration sources updated
More info: see `../README.md` for full documentation.