[CK] Stream-K tile engine test not setting a reasonable CU_COUNT default when the query fails (#5165)

## Motivation

The following error came up when compiling on Windows, where `generate_configs.py` tries to query the GPU for the number of CUs:

```
[composable_kernel configure] -- Generating Stream-K test config files for fp16
[composable_kernel configure] Traceback (most recent call last):
[composable_kernel configure]   File "E:\TheRock\rocm-libraries\projects\composablekernel\test\ck_tile\gemm_streamk_tile_engine\generate_configs.py", line 277, in <module>
[composable_kernel configure]     main()
[composable_kernel configure]     ~~~~^^
[composable_kernel configure]   File "E:\TheRock\rocm-libraries\projects\composablekernel\test\ck_tile\gemm_streamk_tile_engine\generate_configs.py", line 271, in main
[composable_kernel configure]     cu_count, configs_dir_path, tile_sizes, datatype = get_args()
[composable_kernel configure]     ~~~~~~~~^^
[composable_kernel configure]   File "E:\TheRock\rocm-libraries\projects\composablekernel\test\ck_tile\gemm_streamk_tile_engine\generate_configs.py", line 267, in get_args
[composable_kernel configure]     return (int(args.cu_count), args.configs_dir_path, args.tiles, args.datatype)
[composable_kernel configure]     ~~~^^^^^^^^^^^^^^^
[composable_kernel configure] ValueError: invalid literal for int() with base 10: 'Exit code 0xc0000135\n'
[composable_kernel configure] CMake Error at test/ck_tile/gemm_streamk_tile_engine/generate_configs.cmake:98 (message):
[composable_kernel configure]   Eror occured during execution of
[composable_kernel configure]   E:/TheRock/rocm-libraries/projects/composablekernel/test/ck_tile/gemm_streamk_tile_engine/generate_configs.py
[composable_kernel configure] Call Stack (most recent call first):
[composable_kernel configure]   test/ck_tile/gemm_streamk_tile_engine/CMakeLists.txt:301 (generate_test_configs)
[composable_kernel configure]
[composable_kernel configure]
[composable_kernel configure] -- Configuring incomplete, errors occurred!
[composable_kernel configure FAILED WITH CODE 1 in 41 seconds]
ninja: build stopped: subcommand failed.
```

## Technical Details

There was one major problem in the following code, and two changes were made:

```cmake
execute_process(
    COMMAND ${CPP_EXE_PATH}
    OUTPUT_STRIP_TRAILING_WHITESPACE
    ERROR_VARIABLE standard_error
    RESULT_VARIABLE queried_cu_count
)
if (standard_error)
    message(STATUS "Error information from attempting to query HIP device and properties:\n"
                   "${standard_error}")
endif()
```

1. `RESULT_VARIABLE` does not capture the executable's output, but rather its exit code. You can see from the error output that the script was trying to cast `"Exit code 0xc0000135\n"` to an integer. I fixed this by changing `RESULT_VARIABLE` to `OUTPUT_VARIABLE`.

```
[composable_kernel configure] ValueError: invalid literal for int() with base 10: 'Exit code 0xc0000135\n'
```

Note that this also gives us the reason the query failed, exit code 0xc0000135, which needs to be addressed in a separate issue:

> "Exit code 0xc0000135, also seen as -1073741515, is a Windows error indicating that an application failed to start because a required Dynamic Link Library (DLL) file or a system component like the .NET Framework is missing or corrupted"

It's likely that the executable created from this code can't find the HIP DLL, or something similar:

```cmake
set(CPP_FILE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cu_count.cpp)
set(CPP_EXE_PATH ${CMAKE_CURRENT_BINARY_DIR}/cu_count)
execute_process(
    COMMAND ${CMAKE_HIP_COMPILER} -x hip ${CPP_FILE_PATH} -o ${CPP_EXE_PATH}
    RESULT_VARIABLE compile_result
)
```

2. For clarity and consistency, I changed the subsequent check to explicitly look for a non-zero exit code, matching the earlier checks in the CMake file. I also added improved error handling for when the CU-count query fails.

## Test Plan

Ensure it compiles locally and existing CI isn't impacted.

## Test Result

Waiting on CI.
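For reference, the two fixes described in Technical Details (capturing stdout via `OUTPUT_VARIABLE` and checking the exit code explicitly, with a default when the query fails) could be sketched roughly as follows. This is an illustration, not the exact patch; `query_exit_code` and the message wording are illustrative names:

```cmake
# Sketch only: OUTPUT_VARIABLE captures the executable's stdout,
# RESULT_VARIABLE captures its exit code.
execute_process(
    COMMAND ${CPP_EXE_PATH}
    OUTPUT_VARIABLE queried_cu_count
    OUTPUT_STRIP_TRAILING_WHITESPACE
    ERROR_VARIABLE standard_error
    RESULT_VARIABLE query_exit_code
)
if (NOT query_exit_code EQUAL 0)
    message(STATUS "Querying HIP device properties failed (exit code ${query_exit_code}):\n"
                   "${standard_error}\n"
                   "Falling back to the default CU count.")
    # Fall back to a reasonable default instead of passing the error text downstream.
    set(queried_cu_count 100)
endif()
```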
## Submission Checklist

- [x] Look over the contributing guidelines at https://github.com/ROCm/ROCm/blob/develop/CONTRIBUTING.md#pull-requests.
# Stream-K GEMM Tile Engine Unit Tests
## How It Works
This unit test system integrates tile_engine's kernel generation into automated testing:
- Uses tile_engine scripts directly: Same Python scripts that generate tile_engine kernels
- JSON-based configuration: Define test parameters in JSON files (like tile_engine)
- Build-time generation: CMake calls tile_engine scripts to generate kernel headers
- Individual test executables: Each kernel configuration becomes a separate test
- tile_engine verification: Uses the exact same error thresholds and validation as tile_engine
## Tile Engine Integration
```
JSON Config → tile_engine Python scripts → Generated Headers → Test Executables
```

- `--list_kernels`: Get available kernel configurations from JSON
- `--gen_individual`: Generate all kernel headers in parallel during CMake configuration
- `--gen_single`: Generate an individual kernel header for each configuration
- Same verification: Uses tile_engine's adaptive error thresholds and reference calculations
- Same patterns: Follows tile_engine's tensor initialization, stride calculation, and kernel launching
## Config-Specific Test Parameters
Each test configuration can specify optimized problem sizes in its JSON file:
- `test_params.problem_sizes`: Array of `{m, n, k, split_k}` configurations
- CMake extraction: `extract_test_params.py` generates config-specific test parameter files
- Build integration: Each test target uses parameters appropriate for its kernel configuration
- Optimized testing: Different configs test different problem sizes that showcase their strengths
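As an illustration, a config's test parameters might look like the sketch below. Only `test_params.problem_sizes` and the `{m, n, k, split_k}` fields are taken from the description above; the surrounding structure and the concrete sizes are hypothetical:

```json
{
  "test_params": {
    "problem_sizes": [
      { "m": 3840, "n": 4096, "k": 2048, "split_k": 1 },
      { "m": 512,  "n": 512,  "k": 8192, "split_k": 4 }
    ]
  }
}
```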
The key idea: Unit tests that use tile_engine's exact kernel generation and verification methodology instead of creating separate test infrastructure.
## Test Configurations
Test configs are generated during the Generation Phase. They are stored under the build directory at `test/ck_tile/gemm_streamk_tile_engine/configs`. The Compute Unit (CU) count of the device is required to generate the configs. If the Generation Phase occurs on a machine without a GPU, or on a machine whose GPU architecture differs from the one on which you will run the tests, you can set the CU count manually with the `CU_COUNT` option:
```bash
# Assuming you are at the root of the repo
cd build
../script/cmake-ck-dev.sh .. gfx90a -G Ninja -DCU_COUNT=100
```
You can reference the public whitepaper for your specific GPU to get the appropriate CU count.
If no `CU_COUNT` option is given and no HIP device is found, then the default value of 100 CUs will be used to determine the problem sizes tested.
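The fallback behavior can be sketched in Python. `parse_cu_count` is a hypothetical helper for illustration, not the actual function in `generate_configs.py`:

```python
DEFAULT_CU_COUNT = 100  # default used when no CU_COUNT is given and no HIP device is found


def parse_cu_count(raw: str) -> int:
    """Return the queried CU count, falling back to the default when the
    query produced anything other than a positive integer (e.g. an error
    string such as 'Exit code 0xc0000135')."""
    try:
        count = int(raw.strip())
    except ValueError:
        return DEFAULT_CU_COUNT
    return count if count > 0 else DEFAULT_CU_COUNT
```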
### 1. Smoke Tests
- Purpose: Basic functionality validation for fp16/bf16/fp8/bf8 data types
- Config: 256x256x32 (for bf16/fp16) or 128x128x32 (for bf8/fp8), warp 2x2x1, warp_tile 32x32x16
- Traits: compv3 pipeline only
- Coverage: All 4 layouts (rcr, rrr, ccr, crr)
## Data Type Support
- ✅ fp16, bf16, fp8, bf8: Fully supported - all layouts (rcr, rrr, ccr, crr)
- ❌ fp64: Not supported (hardware MFMA limitation)
- ⏳ fp32, pk-int4-t: Not yet supported by gemm_instance_builder (will be added later)
## Test Result Behavior
Tests automatically handle unsupported configurations through runtime validation:
- PASSED: Kernel executed correctly with results within error thresholds ✅
- SKIPPED: Kernel validation returned "Arguments not supported" (expected for certain problem sizes/configurations) ⚠️
- FAILED: Actual error or incorrect computation results ❌
When a kernel's `IsSupportedArgument()` check fails (e.g., due to vector alignment requirements, dimension constraints, or padding limitations), the test is automatically skipped rather than failed. This allows comprehensive testing across various problem sizes while gracefully handling configurations that don't meet specific kernel requirements.
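The three-way outcome above can be summarized as a small decision rule. This Python sketch is illustrative only; the real tests are C++ and the function name is hypothetical:

```python
def classify_result(args_supported: bool, within_thresholds: bool) -> str:
    """Map a kernel run to the PASSED/SKIPPED/FAILED outcomes described above."""
    if not args_supported:
        # IsSupportedArgument() rejected the problem: skip, don't fail.
        return "SKIPPED"
    return "PASSED" if within_thresholds else "FAILED"
```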