# ck_tile: add gtest unit tests for MX flatmm (gfx950)
## Summary
- Add correctness unit tests for the MX-format flatmm kernel
(`example/ck_tile/18_flatmm/mxgemm`) under `test/ck_tile/flatmm/`
- Tests cover all five dtype combinations: FP4×FP4, FP8×FP8, FP6×FP6,
FP8×FP4, FP4×FP8
- Tests cover all four kernel dispatch paths (the `has_hot_loop` ×
`tail_num` product):
- `has_hot_loop=false, tail=ODD` (K=256, num_loop=1)
- `has_hot_loop=false, tail=EVEN` (K=512, num_loop=2)
- `has_hot_loop=true, tail=ODD` (K=768, num_loop=3)
- `has_hot_loop=true, tail=EVEN` (K=1024, num_loop=4)
- Remove unsupported `-split_k` CLI option from
`tile_example_mx_flatmm`; the pre-shuffled B layout is incompatible with
K-splitting and the option silently produced wrong results
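The four dispatch paths above follow directly from `num_loop`: a hedged sketch of the selection logic, where `K_PER_LOOP = 256` and the `num_loop > 2` threshold are inferred from the four listed cases and may differ from the kernel's actual constants:

```python
K_PER_LOOP = 256  # assumed K consumed per main-loop iteration (inferred from K=256 -> num_loop=1)

def dispatch_path(K):
    """Return (has_hot_loop, tail) for a given K, mirroring the four cases above."""
    num_loop = K // K_PER_LOOP
    has_hot_loop = num_loop > 2               # num_loop 1-2 -> no hot loop, 3+ -> hot loop
    tail = "ODD" if num_loop % 2 else "EVEN"  # parity of num_loop picks the tail handler
    return has_hot_loop, tail
```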
## Changes
**New files (`test/ck_tile/flatmm/`):**
- `CMakeLists.txt` — builds 40 kernel instances as a shared OBJECT
library, links into 5 per-dtype test executables; forwards
`-DCK_TILE_USE_OCP_FP8` when `CK_USE_OCP_FP8` is ON
- `test_mx_flatmm_base.hpp` — base test fixture with
`run_test_with_validation(M, N, K, kbatch=1)`
- `test_mx_flatmm_fixtures.hpp` — concrete `TestMXFlatmm` typed test
class and type aliases
- `test_mx_flatmm_fp{4fp4,8fp8,6fp6,8fp4,4fp8}.cpp` — per-dtype
`TYPED_TEST_SUITE` files
**Modified files:**
- `example/ck_tile/18_flatmm/mxgemm/mx_flatmm_arch_traits.hpp` — moved
`preShuffleWeight` here (was in `mx_flatmm.cpp`) so it is includeable by
both the example and the tests
- `example/ck_tile/18_flatmm/mxgemm/mx_flatmm.cpp` / `run_mx_flatmm.inc`
— removed `-split_k` CLI arg, hardcoded `k_batch=1`, fixed `k_split`
formula, updated call sites after `preShuffleWeight` move
- `test/ck_tile/CMakeLists.txt` — added `add_subdirectory(flatmm)`
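Together, the five dtype combinations and the four K sizes form the validation matrix the new tests sweep. A hedged sketch of that matrix (the real tests use gtest `TYPED_TEST_SUITE` fixtures; the M/N placeholders and the function name here are illustrative, not the fixture's actual API):

```python
import itertools

# Five dtype combos x four dispatch-path K sizes = 20 validation runs per M/N shape.
DTYPE_COMBOS = [("fp4", "fp4"), ("fp8", "fp8"), ("fp6", "fp6"),
                ("fp8", "fp4"), ("fp4", "fp8")]
K_SIZES = [256, 512, 768, 1024]  # one K per (has_hot_loop, tail_num) path

def test_cases(M=128, N=128):
    """Enumerate (a_dtype, b_dtype, M, N, K) tuples for the full sweep (illustrative)."""
    return [(a, b, M, N, K)
            for (a, b), K in itertools.product(DTYPE_COMBOS, K_SIZES)]
```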
# Stream-K GEMM Tile Engine Unit Tests

## How It Works
This unit test system integrates tile_engine's kernel generation into automated testing:
- Uses tile_engine scripts directly: Same Python scripts that generate tile_engine kernels
- JSON-based configuration: Define test parameters in JSON files (like tile_engine)
- Build-time generation: CMake calls tile_engine scripts to generate kernel headers
- Individual test executables: Each kernel configuration becomes a separate test
- Tile_engine verification: Uses the exact same error thresholds and validation as tile_engine
## Tile Engine Integration
JSON Config → tile_engine Python scripts → Generated Headers → Test Executables
- `--list_kernels`: Get available kernel configurations from JSON
- `--gen_individual`: Generate all kernel headers in parallel during CMake configuration
- `--gen_single`: Generate an individual kernel header for each configuration
- Same verification: Uses tile_engine's adaptive error thresholds and reference calculations
- Same patterns: Follows tile_engine's tensor initialization, stride calculation, and kernel launching
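As a rough illustration of the `--list_kernels` step, a minimal sketch that pulls kernel names out of a JSON config; the schema shown (`kernels` array with `name` keys) is an assumption for illustration, not tile_engine's actual format:

```python
import json

def list_kernels(config_text):
    """Return the kernel configuration names found in a JSON config (assumed schema)."""
    cfg = json.loads(config_text)
    return [entry["name"] for entry in cfg.get("kernels", [])]

# Illustrative config; real tile_engine JSON files look different.
example = '{"kernels": [{"name": "streamk_fp16_rcr"}, {"name": "streamk_bf16_rrr"}]}'
```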
## Config-Specific Test Parameters
Each test configuration can specify optimized problem sizes in its JSON file:
- `test_params.problem_sizes`: Array of `{m, n, k, split_k}` configurations
- CMake extraction: `extract_test_params.py` generates config-specific test parameter files
- Build integration: Each test target uses parameters appropriate for its kernel configuration
- Optimized testing: Different configs test different problem sizes that showcase their strengths
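A minimal sketch of the extraction step: read `test_params.problem_sizes` from a config and emit one `{m, n, k, split_k}` tuple per entry. The JSON shape is assumed from the description above; the real `extract_test_params.py` may differ:

```python
import json

def extract_problem_sizes(config_text):
    """Return (m, n, k, split_k) tuples from test_params.problem_sizes (assumed schema)."""
    cfg = json.loads(config_text)
    sizes = cfg.get("test_params", {}).get("problem_sizes", [])
    return [(p["m"], p["n"], p["k"], p["split_k"]) for p in sizes]
```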
The key idea: Unit tests that use tile_engine's exact kernel generation and verification methodology instead of creating separate test infrastructure.
## Test Configurations
Test configs are generated during the Generation Phase and stored under the build directory at `test/ck_tile/gemm_streamk_tile_engine/configs`. Generating the configs requires the Compute Unit (CU) count of the target device. If the Generation Phase runs on a machine without a GPU, or on a machine whose GPU architecture differs from the one the tests will run on, you can set the CU count manually with the `CU_COUNT` option:
```shell
# Assuming you are at the root of the repo
cd build
../script/cmake-ck-dev.sh .. gfx90a -G Ninja -DCU_COUNT=100
```
You can reference the public whitepaper for your specific GPU to get the appropriate CU count.
If no CU_COUNT option is given and no HIP device is found, then the default value of 100 CUs will be used to determine the problem sizes tested.
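The fallback behavior described above can be sketched as a precedence check; treating an explicit `CU_COUNT` as overriding a detected device is an assumption here, as the text only specifies the default when neither is available:

```python
DEFAULT_CU_COUNT = 100  # fallback when no -DCU_COUNT is given and no HIP device is found

def resolve_cu_count(cmake_cu_count=None, detected_cu_count=None):
    """Pick the CU count used to size the generated test problems (sketch)."""
    if cmake_cu_count is not None:      # assumed: explicit option wins
        return cmake_cu_count
    if detected_cu_count is not None:   # CU count queried from the HIP device
        return detected_cu_count
    return DEFAULT_CU_COUNT
```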
### 1. Smoke Tests
- Purpose: Basic functionality validation for fp16/bf16/fp8/bf8 data types
- Config: 256x256x32 (for bf16/fp16) or 128x128x32 (for bf8/fp8), warp 2x2x1, warp_tile 32x32x16
- Traits: compv3 pipeline only
- Coverage: All 4 layouts (rcr, rrr, ccr, crr)
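The smoke-test configuration above reduces to a per-dtype tile lookup; a sketch with illustrative names (the function is not part of the test code):

```python
SMOKE_LAYOUTS = ["rcr", "rrr", "ccr", "crr"]  # all four covered layouts

def smoke_tile(dtype):
    """Per-dtype smoke-test tile (M x N x K) from the config above."""
    return (256, 256, 32) if dtype in ("fp16", "bf16") else (128, 128, 32)
```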
## Data Type Support
- ✅ fp16, bf16, fp8, bf8: Fully supported - all layouts (rcr, rrr, ccr, crr)
- ❌ fp64: Not supported (hardware MFMA limitation)
- ⏳ fp32, pk-int4-t: Not yet supported by gemm_instance_builder (will be added later)
## Test Result Behavior
Tests automatically handle unsupported configurations through runtime validation:
- PASSED: Kernel executed correctly with results within error thresholds ✅
- SKIPPED: Kernel validation returned "Arguments not supported" (expected for certain problem sizes/configurations) ⚠️
- FAILED: Actual error or incorrect computation results ❌
When a kernel's IsSupportedArgument() check fails (e.g., due to vector alignment requirements, dimension constraints, or padding limitations), the test is automatically skipped rather than failed. This allows comprehensive testing across various problem sizes while gracefully handling configurations that don't meet specific kernel requirements.
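The PASS/SKIP/FAIL mapping described above amounts to checking the support query before comparing results; a sketch with illustrative names, where `is_supported_argument` stands in for the kernel's `IsSupportedArgument()` check:

```python
def classify(is_supported_argument, within_threshold):
    """Map one kernel run to the three test outcomes described above (sketch)."""
    if not is_supported_argument:
        # e.g. vector alignment, dimension, or padding constraints not met
        return "SKIPPED"
    return "PASSED" if within_threshold else "FAILED"
```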