Commit Graph

2393 Commits

Author SHA1 Message Date
linqunAMD
dde91b60fb [CK] Fix example_grouped_conv_bwd_data_xdl_fp16 with ksplit = 2 (#2943)
root cause: AK1 and BK1 may differ in the class template, so we need to calculate k0 per block separately when ksplit is not 1.
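
A minimal sketch of the idea, with illustrative names rather than CK's actual identifiers:

```cpp
// Hypothetical sketch: AK1 and BK1 are independent template parameters, so
// the k0-per-block extent must be derived per operand when ksplit > 1.
// The exact formula is illustrative, not CK's implementation.
constexpr int CalculateK0PerBlock(int k, int k1, int ksplit)
{
    return k / (k1 * ksplit); // each split handles K / ksplit, packed in K1-wide chunks
}

// usage: a_k0 = CalculateK0PerBlock(K, AK1, ksplit);
//        b_k0 = CalculateK0PerBlock(K, BK1, ksplit);
```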

[ROCm/composable_kernel commit: 769c58f133]
2025-09-29 07:56:33 -07:00
Bartłomiej Kocot
1d9ec09cf2 Grouped Conv Bwd Data out index calculation optimizations (#2917)
* Grouped Conv Bwd Data index calculation optimizations

* fixes

* refactor instances

* gfx12 fixes

* temporary disable splitK for gfx12

[ROCm/composable_kernel commit: 5477811670]
2025-09-29 15:59:11 +02:00
SamiAario-AMD
0da205766b [CK_TILE] Fixing Type Conversions in PassThroughPack8 (#2769)
* Change the return type of run_gemm_combinations in the basic tests

* Change the return type of run_gemm_combinations in the universal tests

* Add universal GEMM tests for bf16 x pk_i4 and fp16 x pk_i4

* Add universal GEMM test for fp8 x pk_i4

* Add basic GEMM tests for bf16 x pk_i4, fp16 x pk_i4 and fp8 x pk_i4.

* Add missing GemmTypeConfig<ck_tile::fp8_t, ck_tile::pk_int4_t, ck_tile::half_t>

* Add missing GemmTypeConfig<ck_tile::bf16_t, ck_tile::pk_int4_t, ck_tile::bf16_t>

* No need for utility in test_ck_tile_elementwise_1d

* Fix conversion from pk_int4x4_t to bf16x8_t in PassThroughPack8

* Avoid union-based type punning in float_to_bf16_truc_raw to make it constexpr compliant (see the sketch after this list)

* For consistency also make float_to_bf16_truc_nan_raw constexpr compliant by removing the union

* Use a static_cast to bfloat16_t only when CK_TILE_USE_LLVM_BUILTIN_BF16 is enforced

* Convert from float to bf16 during compilation rather than using magic values

* Fix conversion from pk_int4x4_t to fp8x8_t in PassThroughPack8

* Comment out the basic test for fp16 x pk_i4 as it does not pass

* Add missing GemmTypeConfig<ck_tile::bf8_t, ck_tile::pk_int4_t, ck_tile::half_t>

* Fix conversion from pk_int4x4_t to bf8x8_t in PassThroughPack8

* Add basic and universal GEMM tests for bf8 x pk_i4

* Switch back to amd_assembly_i4_to_fp8x8 in PassThroughPack8 as it works now

* Switch back to amd_assembly_i4_to_bf8x8 in PassThroughPack8 as it works now

* Remove the inefficient fallbacks for fp8 and bf8 in elementwise/unary_element_wise_operation.hpp

* Use explicit macros for enabling and disabling the constexpr lookup based converters

* Fix two failing tests

* Avoid union-based type punning in float_to_bf16_rtn_raw to make it constexpr compliant

* Use float_to_bf16_rtn_raw instead of float_to_bf16 to create the bf16 lookup table for use in conversions from pk_int4 to bf16

* On ROCm 7.0.1 we need an explicit cast from uint16_t to bf16_t
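
Several bullets above replace union-based type punning with constexpr-compliant code. A minimal sketch of the pattern, assuming `__builtin_bit_cast` (available in C++17 mode on clang/gcc) and an illustrative function name:

```cpp
#include <cstdint>

// Hypothetical sketch: reading from a union member is not allowed in a
// constexpr context, but a bit_cast is, so the float's bits can be taken
// and truncated to the upper 16 bits that form a bf16.
constexpr uint16_t float_to_bf16_trunc_raw_sketch(float f)
{
    const uint32_t bits = __builtin_bit_cast(uint32_t, f);
    return static_cast<uint16_t>(bits >> 16);
}

static_assert(float_to_bf16_trunc_raw_sketch(1.0f) == 0x3F80);
```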

[ROCm/composable_kernel commit: 0f10e6d921]
2025-09-29 13:34:47 +03:00
John Afaganis
867351e019 Use git ls-files to select candidate files for clang format
This change ensures that the files selected for clang-format validation are exactly the ones tracked by the git repo being tested. This protects against a known issue where the repo under test contained "stray files" from a previous test.

[ROCm/composable_kernel commit: e8842e3c1f]
2025-09-27 15:47:31 -06:00
Anton Gorenko
bc9362af55 [CK_TILE] Support f32 in FMHA (fwd and bwd) (#2836)
* Support 16x16 (MFMA, WMMA) and 32x32 (MFMA) tiles in fwd and bwd BlockDropout

Add comments with dropout implementation details

Fix performance regression of fwd+dropout

    * Remove some usage of type punning (reinterpret_cast with ref or ptr) in Philox;
    * "scalarize" seed and offset; they may come either from kernel args or from device memory
      (presumably loaded with vector loads).

    These changes help the compiler to produce more optimal code and reduce register spilling.

Use WarpGemmDispatcher instead of explicit WarpGemmMfma... to get CWarpDstrEncoding

Use code based on BlockDropout in BlockDropoutBwd

Refactor BlockDropout (fwd)

Implement BlockDropout (fwd) for WMMA

    Originally BlockDropout only supported 32x32 tiles (IsWG32 = true),
    this version supports 16x16 tiles.
    If MPerBlock > MWarp * 16, it can generate numbers for two 16x16 tiles, similarly
    to BlockDropoutBwd.

Implement BlockDropoutBwd for WMMA

Remove MakeRandValLds* functions unused in BlockDropoutBwd

Remove unused Run overload from BlockDropoutBwd

* Fix regression with philox seed and offset when they exceed 32-bit int

__builtin_amdgcn_readfirstlane works with 32-bit values; seed and offset
are 64-bit, so they get truncated (see the sketch below).
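
A minimal device-side sketch of the fix, assuming an illustrative helper name (compiled with hipcc):

```cpp
#include <hip/hip_runtime.h>
#include <cstdint>

// Hypothetical sketch: the builtin is defined for 32-bit values, so a 64-bit
// seed/offset is split into halves, broadcast separately, and recombined.
__device__ uint64_t readfirstlane_u64(uint64_t v)
{
    const uint32_t lo = __builtin_amdgcn_readfirstlane(static_cast<uint32_t>(v));
    const uint32_t hi = __builtin_amdgcn_readfirstlane(static_cast<uint32_t>(v >> 32));
    return (static_cast<uint64_t>(hi) << 32) | lo;
}
```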

* Add F32 MFMA warp gemms

* Support f32 in fwd FMHA

* Implement transpose_vectors for 4-byte types (float)

* Fix unexpected implicit f32->uint32 cast in buffer_store<4>

__builtin_amdgcn_raw_buffer_store_b32 expects an unsigned int, but a float was passed (implicitly converted to uint); see the demonstration below.
mbuf_t types in other buffer_store<> are changed for consistency.
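
The pitfall is a value conversion where a bit reinterpretation was intended. A minimal host-side demonstration of the difference (illustrative, not CK's code):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    const float v = 1.0f;
    const uint32_t converted     = static_cast<uint32_t>(v);        // value conversion: 1
    const uint32_t reinterpreted = __builtin_bit_cast(uint32_t, v); // raw bits: 0x3F800000
    std::printf("converted=%u reinterpreted=0x%X\n", converted, reinterpreted);
    return 0;
}
```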

* Support F32 in bwd FMHA

hdim = 256 is disabled for now because it uses too much memory on gfx90a

* Support Headdim = 48 (divisible by 16) in fwd

* Add fp32-specific receipts (800 and 801)

* Tune fwd tiles

* Tune bwd tiles

* Use small tiles only for small seqlen_q

* Fix after rebasing

* Fix selection of a fallback tile based on bm0

The assumption that the largest bm0 == 128 is not always true for
current fp32 tiles.

* Remove constraints and adjust filtering for fp32

Custom constraints are no longer needed because the smallest tile is now
selected automatically based on seqlen_q.
Filters related to qr_async_trload disabled valid fp32 tiles.

* Add fp32 tests

* Make splitkv and appendkv compile for fp32 only

There are no instances yet, but the API must still compile when only fp32 is
requested.

* Remove unimportant f32 instances

* Add test_ck_tile_fmha_*_fp32 to REGRESSION_TESTS

* Replace magic numbers with a constant, improve comments for dropout

* Update changelog

* Fix condition that dq_acc must be set to zero when mask is used

The change was introduced in #2799

* Replace warp_uniform with recently added amd_wave_read_first_lane

* Add hdim = 96 and 192 to fwd

[ROCm/composable_kernel commit: 1edd250115]
2025-09-27 18:03:48 +05:00
Anton Gorenko
4158d33735 [CK_TILE] FMHA Fix synchronization issue in FWD splitkv combine pipeline (#2934)
* Fix validation of rotary embedding with time_kernel_

When rotary embedding is used, the appendkv kernel modifies the q tensor
(multiple times when time_kernel_ is set). We need to reset the q buffer
and rerun all kernels.

* Fix synchronization issue in splitkv combine pipeline

Different warps can read and then rewrite the same values of lse_acc_lds.
Warps sometimes progress at different speeds, so one warp can rewrite
values that are still being read by another warp.

Running the tests multiple times and, preferably, with multiple
processes on the same GPU helps to trigger this issue:

bin/test_ck_tile_fmha_fwd_fp16 --gtest_repeat=-1 --gtest_shuffle --gtest_throw_on_failure --gtest_filter="TestCkTileFmhaFwd/*KV*"

[ROCm/composable_kernel commit: c6bfd97c2d]
2025-09-27 08:16:10 +05:00
emezh
2c114d7ccc fix copy-paste bug in get_matrix_b; re-enable all tests in multi_abd (#2939)
[ROCm/composable_kernel commit: 2aa06fbd45]
2025-09-26 22:55:18 -04:00
lalala-sh
f17f16e8ec fix wp gemm bug when permuteN is false (#2935)
* fix wp gemm bug when permuteN is false

* code clean

---------

Co-authored-by: valarLip <340077269@qq.com>

[ROCm/composable_kernel commit: ee9769616a]
2025-09-26 13:28:54 -07:00
Aviral Goel
e98edd3322 Integrate Multi D GEMMs into Grouped GEMMs along with unit tests (#2923)
* feat(grouped_gemm_multi_d): add new example that integrates grouped_gemm and multi_d_gemm feature

* feat: generalized grouped_gemm_kernel.hpp

* feat: generalized grouped_gemm_kernel.hpp even further by removing hardcoded 0

* refactor: grouped_gemm_multi_d relies on grouped_gemm_kernel

* tests(grouped_gemm): grouped_gemm test suite passes with minor adjustments

* fix: segfault fix by passing correct parameters for d tensors

* docs: add multi d info and trim down outdated content

* tests: add unit tests for grouped_gemm_multi_d and minor changes in grouped_gemm related test for compatibility

* style: clang format

* fix: incorrect validation method and Dtensor layout in test suite

[ROCm/composable_kernel commit: a44bea45b2]
2025-09-26 09:59:58 -07:00
Geo Min
f7bd8c1634 [TheRock CI] Adding MIOpen at HEAD (#2929)
* Adding MIOpen at HEAD

* Adding container and also adding CI run for .github paths

* Adding correct flags

* Adding patches

* Adding exception for ck

* rocm-libraries at new path

* adding global safe dir

* reorder

* Fixing paths

* Adding sharding

[ROCm/composable_kernel commit: e40c0acef2]
2025-09-26 09:08:15 -07:00
rahjain-amd
a8ff84ed1b Disable Rapid Json to be used by Default (#2936)
To enable the JSON dump, we can now build with -DCK_ENABLE_JSON_DUMP=1.

[ROCm/composable_kernel commit: e92e69318e]
2025-09-26 09:05:35 -07:00
Christopher Millette
3f55777e66 Update CODEOWNERS
[ROCm/composable_kernel commit: f92b3c7a1e]
2025-09-26 09:41:33 -06:00
Yi DING
a3296e00b8 [CK_TILE] FMHA BWD Pad HDim to a Multiple of 8 (#2918)
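
A one-line sketch of the rounding involved, with a hypothetical helper name:

```cpp
// Hypothetical sketch: round a head dimension up to the next multiple of 8.
constexpr int pad_hdim_to_8(int hdim) { return (hdim + 7) & ~7; }
static_assert(pad_hdim_to_8(96) == 96 && pad_hdim_to_8(99) == 104);
```
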
[ROCm/composable_kernel commit: 32773fe5cb]
2025-09-26 16:42:59 +08:00
Jeff Huang
00b80bb4a1 Add sequence padding and variable length support in fmha (#2932)
* [CK_TILE] Add sequence padding and variable length support in fmha (and v3)

 - Group Mode Padding: Introduces the `-s_qpad` argument to support
   physically padded layouts. Kernels now use padded start pointers
   (`seqstart_padded_*_ptr`) for memory addressing.

 - Batch Mode Variable Length: Adds `-q_eff_lens` and `-kv_eff_lens`
   arguments for efficient processing of variable-length sequences by
   passing cumulative effective lengths (`cu_seqlen_*_ptr`) to the kernel
   (see the sketch after this list).

 - FMHA examples: Support padding and variable length both in
   group and batch mode. Dispatcher is updated as well (dispatch to
   kPadSeqLenK enabled pipeline).

 - New padding test cases: Add padding test cases to `smoke_test_fwd.sh` and
   `test_fmha_fwd.inc`, and add benchmarks to `benchmark_fwd.sh` and
   `benchmark_fwd_v3.sh` as well. These test cases and benchmarks
   specifically validate and benchmark the new padding and variable-length
   functionality in both group and batch modes.

* [CK_TILE] Fix build error in fmha unit tests

* [CK_TILE] add mqa, gqa to sequence padding unit tests

* [CK_TILE] Reduce the number of padding seqlen unit tests in FMHA to avoid timeouts in CI

* [CK_TILE] remove unnecessary MakeKargs overload in FmhaFwdV3Kernel and FmhaFwdKernel
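
A minimal sketch of the cumulative-lengths convention referenced above, with illustrative names: the kernel receives a prefix sum of per-batch effective lengths, so batch b spans rows [cu_seqlen[b], cu_seqlen[b+1]).

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: build cumulative sequence lengths from per-batch
// effective lengths, the layout implied by cu_seqlen_*_ptr above.
std::vector<int32_t> make_cu_seqlens(const std::vector<int32_t>& eff_lens)
{
    std::vector<int32_t> cu(eff_lens.size() + 1, 0);
    for(std::size_t b = 0; b < eff_lens.size(); ++b)
        cu[b + 1] = cu[b] + eff_lens[b];
    return cu; // batch b occupies rows [cu[b], cu[b+1])
}
```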

[ROCm/composable_kernel commit: 518d24e662]
2025-09-26 12:36:27 +08:00
kyle-256
cf9fbe9f20 use inline function in hpp (#2922)
[ROCm/composable_kernel commit: b0a2d99d10]
2025-09-25 18:29:26 -07:00
emezh
f628be2ed1 Verify HostTensorDescriptor when it is created (#2829)
* add proper GEMM layout verification

* Handle "auto" strides.

CalculateStrides is only called when the tensor's strides are empty or all of them are <=0 ("auto" strides).
CalculateStrides now supports GEMM::ColumnsMajor order. The assumption is still that it applies only to the inner two dims.
ValidateStrides throws if any of the tensor's strides is <=0.
profile_gemm_multiply_add was updated to support "auto" strides for tensors (a sketch of the auto-stride rule appears at the end of this list).

Manual tests for profile_gemm_multiply_add (matrix B in Row and Col modes)
auto-strides
	bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 0 0 0 0 0
	bin/ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 0 0 0 0 0
	bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 -1 -1 -1 -1 -1
Note, -1 should be deprecated (use 0 instead)

explicit strides (same as auto)
	bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 128 128 128 128 128
	bin/ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 128 128 128 128 128

explicit strides (not the same as auto)
	bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 130 132 134 136 138
	bin/ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 130 132 134 136 138

mix of explicit and auto strides
	bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 128 128 128 128 0

invalid stride
	bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 0 0 0 0 64
	terminate called after throwing an instance of 'std::runtime_error'
	  what():  Invalid strides for RowMajor: mLens: 128 128 , mStrides: 64 1
	Aborted (core dumped)

* - add more names to ck::tensor_layout for easier namespace hierarchy checking
- updated convolutional layouts to use explicit ones or BaseConvolutionalLayout where it is not clear which layout to use (TBD) - see include/ck/library/utility/convolution_host_tensor_descriptor_helper.hpp

* added handling of partially initialized strides for GEMM; fixed more tests.

* clang-format and more fixes

* replace the long dash with a simple hyphen; it causes a build failure in CK codegen.

* increase the size of the input; otherwise the output size becomes zero or negative with a large filter size

* select stride based on layout

* specify layout explicitly to avoid errors in HostTensorDescriptor creation

* add validation for higher GEMM tensor dimensions; add a docstring to `HostTensorDescriptor`

* It is not clear why the permute test in test/permute_scale/test_permute_scale.cpp uses so many invalid strides. Set the layout to BypassLayoutVerification to avoid a flood of errors.

* fix test (incl removing invalid config)

* fix moe examples:
- (in .cpp) add a layout argument to non-2D tensors
- (in .hpp) fix asserts/failures that show up in Debug mode, specifically addressing a 2D tensor by a single index (and a 3D tensor by a 2D index)

* fix moe_gemm2 example.

* fix profile and wmma examples

* clean up early mods for ckProfiler; verified with:
```
ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 0 0 0 0 0
ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 0 0 0 0 0
ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 130 132 134 136 138
ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 130 132 134 136 138
#
ckProfiler gemm_fastgelu 1 0 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 1 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 2 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 3 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 0 1 2 0 1 128 128 128 128 128 128
#
ckProfiler gemm_add_relu 0 0 1 1 0 1 128 128 128 0 0 0 0
# ckProfiler gemm_add_relu 0 1 1 1 0 1 128 128 128 0 0 0 0    # not implemented
# ckProfiler gemm_add_relu 0 2 1 1 0 1 128 128 128 0 0 0 0    # not implemented
# ckProfiler gemm_add_relu 0 3 1 1 0 1 128 128 128 0 0 0 0    # not implemented
ckProfiler gemm_add_relu 0 0 1 1 0 1 128 128 128 128 128 128 128
#
ckProfiler gemm_add_relu_add_layernorm 1 0 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 1 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 2 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 3 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 0 1 1 0 0 128 128 128 130 132 134 136 138
#
example_gemm_add_multiply_dl_fp16
example_gemm_add_multiply_xdl_fp16
#
ckProfiler gemm_blockscale_wp 7 1 1 1 1 0 1 128 128 128 0 0 0
ckProfiler gemm_blockscale_wp 7 1 1 1 1 0 1 128 128 128 128 128 128
```

* temporarily skip the first 8 test configs - they throw errors

* temporarily skip the first 8 test configs in wmma too - they throw errors
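
A minimal sketch of the auto-stride rule described above, with illustrative names, a packed row-major default, and partial-stride handling omitted:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: strides are "auto" when empty or all <= 0, in which
// case packed row-major strides are computed from the lengths; otherwise the
// caller's strides are taken as-is (the real code additionally validates
// them and throws on invalid values).
std::vector<std::size_t> resolve_strides(const std::vector<std::size_t>& lens,
                                         const std::vector<long>& strides)
{
    const bool is_auto =
        strides.empty() ||
        std::all_of(strides.begin(), strides.end(), [](long s) { return s <= 0; });
    if(!is_auto)
        return std::vector<std::size_t>(strides.begin(), strides.end());

    std::vector<std::size_t> out(lens.size());
    std::size_t stride = 1;
    for(std::size_t i = lens.size(); i-- > 0;)
    {
        out[i] = stride;      // innermost dim gets stride 1
        stride *= lens[i];    // each outer dim strides over the dims inside it
    }
    return out;
}
```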

---------

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>

[ROCm/composable_kernel commit: db2524be2d]
2025-09-25 18:22:13 -07:00
Illia Silin
e94b2f02ac Enable CI on gfx1100 (#2930)
* run CI on different versions of gfx11

* do not use gfx1151 systems

[ROCm/composable_kernel commit: ec4d16b991]
2025-09-25 16:10:54 -07:00
Illia Silin
768e496178 use default docker for build/test on gfx950 (#2928)
[ROCm/composable_kernel commit: 8c1a959913]
2025-09-25 10:40:45 -07:00
Cong Ma
30673dba81 Congma/ck tile/remove cpp 20 code (#2873)
* Remove C++20 code

C++20 features should not be used in CK. Remove all C++20 code.

* fix c++17 build

* format

* fix merge issue

---------

Co-authored-by: Thomas Ning <Thomas.Ning@amd.com>
Co-authored-by: Max Podkorytov <4273004+tenpercent@users.noreply.github.com>

[ROCm/composable_kernel commit: a5d1e25ec7]
2025-09-25 10:34:28 -07:00
Khushbu Agarwal
9ed178a93e Fix for Add the API to load SGPR (#2913)
* Revert "Revert "[CK-Tile] Add the API to load SGPR  (#2878)" (#2904)"

This reverts commit 4c78cc31c5b8e0c9db09c24fa35393f603a8a47f.

* Fix: sgpr minor issue

* cyclic dependency resolved

* clang formatted

* removing unused variable

* clang formatted

---------

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>

[ROCm/composable_kernel commit: b56e5d1d79]
2025-09-25 10:32:42 -07:00
Illia Silin
ff8105704b Add AITER test_mha_varlen (#2927)
* add aiter test_mha_varlen

* don't fail until all aiter test run

* use the original way to run tests, just add new test

[ROCm/composable_kernel commit: 64e61b8647]
2025-09-25 10:00:20 -07:00
Illia Silin
9aa69136e7 fix clang format (#2926)
[ROCm/composable_kernel commit: 9f6fc9fe09]
2025-09-25 09:35:35 -07:00
Jobbins
62d78c7bba [Jenkins] Remove 'Jenkins - ' prefix (#2920)
The prefix causes the status updates from gitStatusWrapper to differ from
the status updates created by the Jenkins server, which results in
duplicates.

[ROCm/composable_kernel commit: 929291741d]
2025-09-25 09:08:29 -06:00
ltqin
712cbfb304 fix fmha fwd kernel name (#2880)
* fix fmha fwd kernel name

* if the input and output types are the same, keep the original code

[ROCm/composable_kernel commit: ab22f91a7c]
2025-09-24 20:00:10 -07:00
yinglu
9cb95d4bc2 Conv:TF32: add more instances - 1 (#2867)
* conv:tf32:add more instances
* add instances of device_grouped_conv_fwd_xdl_f32_comp_instances
* add instances of device_grouped_conv_fwd_xdl_f32_tf32_mem_instances
* add instances of device_grouped_conv_fwd_xdl_large_tensor_f32_tf32_instances
* remove gnhwc/ngchw/ngcdhw instances

[ROCm/composable_kernel commit: df97a286d5]
2025-09-25 09:27:18 +08:00
linqunAMD
e338ee5004 [CK] Fix misc issues in CK examples (#2890)
* [CK] Fix misc CK issues

* revert fp8 change; it causes CI failures.

* resubmit fp8 change

[ROCm/composable_kernel commit: f076f207ce]
2025-09-24 11:28:20 -07:00
Illia Silin
c143f0305c Upgrade to ROCm7.0.1 compiler. (#2909)
* upgrade default docker to rocm7.0.1

* turn on build and test on gfx950 by default

* use rocm-dev instead of rocm

* link libhiprtc for codegen targets

* resolving codegen compilation errors: removed calls to other std functions, resolved issues with int32_t (needed the correct header), and put the use of e8m0 behind header guards

---------

Co-authored-by: Astha Rai <astha.rai713@gmail.com>

[ROCm/composable_kernel commit: 8fe3838c65]
2025-09-24 10:00:53 -07:00
Yi DING
443f2e41fd [CK_TILE] FMHA BWD Add D96 Instances (#2916)
[ROCm/composable_kernel commit: fe0a47a011]
2025-09-24 17:04:23 +08:00
Johannes Graner
beb87960ad [CK Tile] Implement Invoker pattern for remaining grouped convolution examples (#2894)
* Invoker for grouped_conv_fwd

* Invoker for grouped_conv_bwd_data

* Fix incorrect out layout identifier

[ROCm/composable_kernel commit: 15fff74503]
2025-09-24 10:22:38 +02:00
Jingwei Liao
d5b5e4ef95 add fmha dtype fp32 (#2914)
[ROCm/composable_kernel commit: 6805684788]
2025-09-24 15:28:39 +08:00
Sami Remes
c5a3d4c765 [CK_TILE] Fix cshuffle epilogue issue with IsLoadableTile (#2903)
* Fix issue with constexpr checks in scaling/cshuffle

* Remove IsLoadableTile

* Move amd_wave_read_first_lane before first usage

[ROCm/composable_kernel commit: dcd33a6ecc]
2025-09-23 23:08:18 -07:00
Thomas Ning
bdea637a15 Fix the gfx950 numerical errors (#2911)
* Update grouped_gemm example and pipeline

* found the root cause: the transpose was not enabled correctly on gfx950

* Fix v3 pipeline, row and col major

* Disable f8 datatype tests; they fail on gfx950

* fix the abd test by clearing the unsupported runtime argument

---------

Co-authored-by: AviralGoelAMD <aviral.goel@amd.com>
Co-authored-by: Mateusz Ozga <mateusz.ozga@amd.com>

[ROCm/composable_kernel commit: b159841a06]
2025-09-23 22:54:52 -07:00
asleepzzz
651a5dd0b9 Revert "[CK-Tile] Add the API to load SGPR (#2878)" (#2904)
This reverts commit 3e008a2d22ad1ba8a9b2c7eca369a8593b7d6e95.

[ROCm/composable_kernel commit: f161b5b738]
2025-09-23 14:33:51 -07:00
Haocong WANG
173427d877 [FMHA FWD] gfx950 Accuracy enhancement & bug fix (#2900)
* disable cast_tile_pk_fp16_fp32 on gfx950

* fix wrong encoding when hdim is not a power of 2

---------

Co-authored-by: asleepzzz <hanwen.chang@amd.com>

[ROCm/composable_kernel commit: 959df2a155]
2025-09-24 00:59:41 +08:00
Haocong WANG
e28e95529f [CK_TILE] Fix fmha bwd (#2865)
* Fix fmha bwd filter

* remove unnecessary change

* enable test cases

---------

Co-authored-by: Yi DING <yi.ding@amd.com>

[ROCm/composable_kernel commit: 7b16782d7c]
2025-09-23 19:59:27 +08:00
Thomas Ning
e3702467d5 [CK-Tile] Add the API to load SGPR (#2878)
* Have a workable version for SGPR

* have a workable version for atomic add

* Revert "have a workable version for atomic add"

This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.

* substitute with the new sgpr read api

* update the CHANGELOG

* have a workable version for atomic add

* Revert "have a workable version for atomic add"

This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.

* change to static for logic

* have a workable version for atomic add

* Revert "have a workable version for atomic add"

This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.

[ROCm/composable_kernel commit: 2cbbf5dcb3]
2025-09-23 01:23:56 -07:00
Haocong WANG
65e0a25885 [CK_TILE] FMHA FWD bug fix (#2888)
* tempsave debug

* fix the bug in fmha fwd_kernel

* Remove unnecessary changes

* Fix the buggy part

* remove fmha fwd known failure cases

[ROCm/composable_kernel commit: b6e8994386]
2025-09-23 15:00:46 +08:00
Yi DING
01c6567d4c FMHA BWD Avoid SetZero (#2799)
[ROCm/composable_kernel commit: ad259eeae2]
2025-09-23 14:37:48 +08:00
Enrico Degregori
0b149c8695 Wmma support for multiple ABD GEMM (#2803)
* multi_abd wmma support:

 - Add multiple A and B support to multiple D implementation (gridwise level)
 - Add multi_abd GEMM (device level)
 - Add instances (xdl parity)
 - Add tests (both xdl and wmma)
 - Add examples
 - Add ckProfiler support (both xdl and wmma)

* Fix bug in device print function

* Fix unused template parameter

* Fix batched gemm for multiABD gridwise implementation

* Fix gemm_universal_reduce with multiABDs gridwise implementation

---------

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>

[ROCm/composable_kernel commit: 3d29bff2f0]
2025-09-22 18:49:06 -07:00
Max Podkorytov
7122e27fd8 fixup build for #2871 when multiple device targets are used (#2885)
[ROCm/composable_kernel commit: de47ae2fdf]
2025-09-22 08:02:41 -07:00
jakpiase
ccd54f7c92 [CK_TILE] Add conv bwd weight two stage support (#2855)
* resolved conflicts

* add conv bwd weight twostage

* fix one file

* fixes after review

* fixes

* fixes

* Fix

---------

Co-authored-by: Bartlomiej Kocot <barkocot@amd.com>

[ROCm/composable_kernel commit: 624c46866e]
2025-09-22 15:31:25 +02:00
Sami Remes
b6681b5519 [CK_TILE] Tensor-wise scaled quant gemm kernel (#2846)
* rename gemm_group_quant to gemm_quant

* Add TensorWise quant mode (see the sketch after this list)

* Cshuffle epilogue tests with tensor scaling

* Add tensor quant to example

* Don't use readfirstlane for reading scales - doesn't work for some reason

* Add to changelog

* revert include - from a merge problem?

* revert common.hpp include

* revert host.hpp include

* remove unused utility function

* rename quant pipeline problem

* refactor quant tests

* remove aquant utils

* use TEST_F

* fix all tests by changing gemm config

* Use typed tests

* fix copyright
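
A minimal sketch of what tensor-wise scaling means here, under the usual quantization convention (illustrative, not this PR's code): a single scalar scale per tensor, applied to the accumulator in the epilogue.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical sketch: an int8 GEMM accumulator dequantized with one scale
// per tensor (tensor-wise quant), as opposed to per-row or per-block scales.
float dequantize(int32_t acc, float scale_a, float scale_b)
{
    return static_cast<float>(acc) * scale_a * scale_b;
}

int main()
{
    std::printf("%f\n", dequantize(1024, 0.02f, 0.05f)); // prints 1.024000
    return 0;
}
```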

[ROCm/composable_kernel commit: 4363a82bd6]
2025-09-19 16:52:35 -07:00
Illia Silin
b01ad89040 Revert "[CK_TILE] Add sequence padding and variable length support in fmha (a…" (#2883)
This reverts commit de4ad0db7ed5afd238cc8b56a4ee932c08d1b5fe.

[ROCm/composable_kernel commit: b765fe78f3]
2025-09-19 08:15:02 -07:00
Bartłomiej Kocot
67d51ed019 Disable bwd weight split-k autodeduce for single stage kernels (#2856)
* Disable bwd weight split-k autodeduce for single stage kernels

* update interface tests

---------

Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>

[ROCm/composable_kernel commit: 29446da1d5]
2025-09-19 16:27:50 +02:00
Yi DING
e984738534 [CK_TILE] FMHA BWD Fix Decode Accuracy (#2881)
* [CK_TILE] FMHA BWD Fix Decode Accuracy

* use s_waitcnt utils

[ROCm/composable_kernel commit: 6cf3fdd21c]
2025-09-19 21:45:02 +08:00
Jeff Huang
f4426fc544 [CK_TILE] Add sequence padding and variable length support in fmha (a… (#2851)
* [CK_TILE] Add sequence padding and variable length support in fmha (and v3)

 - Group Mode Padding: Introduces the `-s_qpad` argument to support
   physically padded layouts. Kernels now use padded start pointers
   (`seqstart_padded_*_ptr`) for memory addressing.

 - Batch Mode Variable Length: Adds `-q_eff_lens` and `-kv_eff_lens`
   arguments for efficient processing of variable-length sequences by
   passing cumulative effective lengths (`cu_seqlen_*_ptr`) to the kernel.

 - FMHA examples: Support padding and variable length both in
   group and batch mode. Dispatcher is updated as well (dispatch to
   kPadSeqLenK enabled pipeline).

 - New padding test cases: Add padding test cases to `smoke_test_fwd.sh`,
   and add benchmarks to `benchmark_fwd.sh` and `benchmark_fwd_v3.sh` as well.
   These test cases and benchmarks specifically validate and benchmark the
   new padding and variable-length functionality in both group and batch modes.

* [CK_TILE] Fix build error in fmha unit tests

---------

Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
Co-authored-by: Yi DING <yi.ding@amd.com>

[ROCm/composable_kernel commit: 86dd59cd01]
2025-09-19 17:36:49 +08:00
Anton Gorenko
c9bfc1cfd0 [CK_TILE] FMHA Fix synchronization issues in BWD pipelines (#2876)
* Run ctest with --output-on-failure

* Fix synchronization issues in bwd pipelines

The bwd kernel reuses the same area of LDS for ds (SGrad), bias and
dbias (BiasGrad). This means that there must be block_sync_lds between
loading one tensor and storing another to the same area.

Heavy instructions like MFMA/WMMA and global loads are executed between
reuses of the same memory, so in MOST cases loading is finished by all
warps before storing is started. However, sometimes warps progress at
different speeds.
Running the tests multiple times and, preferably, with multiple
processes on the same GPU helps to trigger this issue:

bin/test_ck_tile_fmha_bwd_bf16 --gtest_repeat=-1 --gtest_shuffle --gtest_throw_on_failure
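
A minimal HIP sketch of the hazard described above, with the barrier the fix inserts (illustrative kernel, not the CK_TILE pipeline; launch with 256 threads per block):

```cpp
#include <hip/hip_runtime.h>

// Hypothetical sketch: one LDS region is read by all warps and then reused to
// stage another tensor. Without the middle barrier, a fast warp's store can
// clobber values a slower warp is still reading.
__global__ void lds_reuse(const float* a, const float* b, float* out)
{
    __shared__ float stage[256];

    stage[threadIdx.x] = a[threadIdx.x];
    __syncthreads();                             // stage[] fully holds tensor a

    float va = stage[(threadIdx.x + 128) % 256]; // cross-warp read of a

    __syncthreads();                             // the fix (block_sync_lds): every
                                                 // warp must finish reading before
    stage[threadIdx.x] = b[threadIdx.x];         // stage[] is reused for tensor b
    __syncthreads();

    out[threadIdx.x] = va + stage[(threadIdx.x + 128) % 256];
}
```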

[ROCm/composable_kernel commit: 2aec38f9ec]
2025-09-19 11:34:45 +05:00
ltqin
6bc831c573 Add input fp8 and output bf16 attention (#2726)
* change the host check to use fp16

* fp8 to fp8 compare

* rewrite input parameters

* add not squant

* remove some output code

* for scale = 1

* format

* saturates only for fp8

* add fp8bf16 data type

* add fp8bf16 data type

* fix test fp8 code

* add run_fp8bf16_tests

* change fmha fwd example parameter(adding fp8bf16)

* Support fp8bf16 for Aiter

* Support aiter fp8bf16 in c++

* fix comment about fp8 in readme.md

* add fp8fp32

* add fp8fp32 test

* remove range_q etc.

* format

* fix test parameters for squant and the fmha example's fp8bf16/fp8fp32 input data types

* add fp8bf16 to data_type function

* change colmajor to rowmajor in test_ck_tile_fmha_fwd_fp8

* format

* reset atol for fp8

* fix bug for atol

---------

Co-authored-by: rocking <ChunYu.Lai@amd.com>
Co-authored-by: asleepzzz <hanwen.chang@amd.com>

[ROCm/composable_kernel commit: dd249f1cd6]
2025-09-19 14:26:43 +08:00
Max Podkorytov
ea14551fa9 poc convert fnuz fp8 to non-native dtype similar to ocp (#2871)
[ROCm/composable_kernel commit: e469fee046]
2025-09-18 22:51:01 -07:00
SamiAario-AMD
b1d34eae28 Add gemm weight preshuffle pk_int_t support (#2858)
* Factor out the three separate copies of load_interleaved_pk_type into a common utility class

* Add preprocessing with optional cache flushing and clearing of output for k_batch > 1 to the weight preshuffle GEMM example

* Remove a duplicate function

* Add support for B tensor type pk_int4_t for the weight preshuffle GEMM, with tests included

* I4 support introduced more failing test cases that mirror the existing ones for F8

* Simplify the check for which tests to skip (they all have F8 as A tensor type)

* Add a changelog entry

* add the test for the v2 wp pipeline, polish the code, and add int4 support for the v2 wp pipeline

* have a workable version for atomic add

* Revert "have a workable version for atomic add"

This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.

---------

Co-authored-by: ThomasNing <thomas.ning@amd.com>

[ROCm/composable_kernel commit: 47cd0d5cff]
2025-09-18 21:26:10 -07:00