Gavin Zhao
d1e9058c8e
Add support for RDNA1 GPUs ( #3220 )
...
* Allow compilation for RDNA1 (__gfx101__)
Signed-off-by: Gavin Zhao <git@gzgz.dev>
* More RDNA1 changes
Signed-off-by: Gavin Zhao <git@gzgz.dev>
* Even more RDNA1 changes
Signed-off-by: Gavin Zhao <git@gzgz.dev>
* cmake: skip build quantization for unsupported arches
* add gfx10-1-generic support as well
* add gfx1013 and complete gfx10-1-generic
* fix clang format
* enable DL kernels on gfx101x
---------
Signed-off-by: Gavin Zhao <git@gzgz.dev>
Co-authored-by: illsilin_amdeng <Illia.Silin@amd.com>
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
2026-03-24 16:44:41 -04:00
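Background: HIP's device compiler defines one `__gfx*__` macro per target ISA, which is how RDNA1-only code paths get compiled in or out. A minimal sketch of that guard pattern, assuming the standard per-ISA macro spellings; the helper itself is illustrative, not CK code:

```cpp
#include <hip/hip_runtime.h>

// The HIP/clang device compiler defines one __gfx*__ macro per target ISA
// (e.g. __gfx1010__ for Navi 10), so kernels an architecture cannot run are
// simply compiled out. Illustrative helper, not part of CK.
__device__ bool is_rdna1_build()
{
#if defined(__gfx1010__) || defined(__gfx1011__) || defined(__gfx1012__) || \
    defined(__gfx1013__)
    return true; // RDNA1: wave32, no MFMA units
#else
    return false;
#endif
}
```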
Bartłomiej Kocot
cc98a41f46
Fix Add in dynamic buffer for fp32/i8 ( #2351 )
...
* Fix Add in dynamic buffer for fp32/i8
* fixes
* Fix
2025-06-17 22:25:56 +02:00
Satyanvesh Dittakavi
4c57157d50
Do not use warpSize as compile time constant as it is removed ( #2320 )
...
* Do not use warpSize as compile time constant as it is removed
* Update tile_image_to_column_shape.hpp
update warpSize usage.
* clean-up all use of warpSize, make sure code builds
* fix
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
Co-authored-by: illsilin <Illia.Silin@amd.com>
Co-authored-by: Bartlomiej Kocot <barkocot@amd.com>
2025-06-17 11:54:30 -07:00
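Background: recent HIP releases stop treating `warpSize` as a compile-time constant, since one binary may serve both wave32 (RDNA) and wave64 (CDNA) hardware. A minimal sketch of the runtime pattern this change enforces; the kernel and helper names are illustrative, not from the PR:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Host side: the wavefront size is a device property queried at runtime.
int query_warp_size(int device_id)
{
    hipDeviceProp_t props;
    hipGetDeviceProperties(&props, device_id);
    return props.warpSize; // 32 on RDNA (wave32), 64 on CDNA (wave64)
}

// Device side: `warpSize` is a built-in variable, valid in device code, but
// no longer constexpr, so it cannot size arrays or template arguments.
__global__ void lane_ids(int* out)
{
    out[threadIdx.x] = threadIdx.x % warpSize; // runtime use is fine
}

int main()
{
    std::printf("wavefront size: %d\n", query_warp_size(0));
    return 0;
}
```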
Bartłomiej Kocot
f6c2ff9dce
Grouped convolution forward with clamp ( #2334 )
...
* Grouped convolution forward with clamp
* Optimize clamp
* unary fixes
* test gk bias
* Revert "test gk bias"
This reverts commit 8e42e29d7b.
* Revert "Revert "test gk bias""
This reverts commit e73c0550ce.
* workaround comment
2025-06-16 15:36:53 +02:00
Bartłomiej Kocot
bb4f471b09
Grouped conv bwd weight with grouped gemm ( #2304 )
...
* Grouped conv bwd weight with grouped gemm
* fixes
* fix
* Fixes
* test comments
* restore atol
* fix
2025-06-12 10:15:07 +02:00
Yi DING
37554c31e8
Add MoE & FP8 Blockscale WP Kernels for GFX950 ( #2297 )
...
* [fix] align v3 gufusion pipeline
* fix device kernel selection.
* Add .co direct asm support by CK_USE_ASM_MOE_STAGE2_BLOCKSCALE
* experimental optimization for scale load in blkscale gemm
* Add asm for no-loop v3_128x128x128
* fix bugs
* tune fp8 example
* Update v1_128x128x128 to 2x2 instead of 4x1
* wip
* add warmup to asm launch
* wip2
* 16x16 function merged to moe
* temp save, a performant version.
* wip3
* Update .co binary to 16x16
* 16x16x128 correct; 64x64x128 failed
* update
* use mem_op::set when topk=1
* add mx fp8 b_preshuffle support, function not yet tested.
* Split the fp4 target. Fix the known bugs. 128x128x128 sanity checked; remove prints
* some fixes
* fix update
* remove some unnecessary hacks; enable 256x256x256 tile size
* update for function debug
* Add pipeline v3. Have some runtime issue and register spill
* Fix pipe v3 correctness issue
* remove unnecessary hacks
* clang format
* fix a bug
* fix the bug, functional test passed
* tempsave; buggy when passing 4 e8m0 values to scaled mfma
* added fp4_bpreshuffle example, build failures
* fixed some bugs
* implement shuffled scale mxfp4 gemm; blocker: opsel has no effect
* hotfix
* fix bugs, build passed
* (M, N, K)=(128, 128, 128) function failed.
* temp save for gemm1. Function not ready
* fix compile error. Gemm2 pass. Gemm1 WIP
* fix bug for a lds read
* update moe
* Compile pass. Gemm1 function WIP
* update moe
* fix fp8; fix even/odd
* tempsave
* update moe
* Revert "update"
This reverts commit 960b2bce1c.
* Revert "use mem_op::set when topk=1"
This reverts commit def952a178.
* Add v3 128x128x128_4x4_16x16.co for gfx950
* temp cmake flag suppression for aiter test
* add code for mxfp4 gemm, blockscale not supported yet
* gemm1 up-only pass. GU WIP
* function pass with inline asm hack
* revert unexpected file change
* updated and build passed
* update CE elementOP
* added code for debug
* Gemm1 GUFusion function pass. Perf WIP
* Fix fp8/bf8; remove duplicated code
* disable the scheduler in v3; bring it back when compiler feature ready.
* update moe v1 pipeline
* Add gemm1 v1 32x128x128
* remove schedule barrier
* updated
* Fix fp8/bf8 B-row
* mfma using asm, device result correct, host result need to check
* gemm1 v3 64x128x128 debug
* fix cpu ref
* a/b thread_desc stride fix
* Use random scale for init1
* 16x16x128 input size blockscale function passed
* fix blockscale gemm bug
* tempsave. Almost all instances passed.
* v1 fix for mi350.
* temp save
* debug save
* update debug
* fix the bug, 128x128x256 tile function passed
* v3
* rename moe block selector and pipeline
* Add gemm1 v1
* Add gemm1 v1 to selector
* added mx moe block v3 support, function passed
* compile error fix
* Improve the pipeline
* Pack e8m0 as int32_t
* v1 compile pass. Function not ready
* debug synchronize issue over different GPU/ROCm
* minor fix
* Add profiler filter
* Add f4 ckProfiler
* Fix example compile error
* Add f4 profiler examples
* tempsave
* v1 function pass.
* v3 function pass
* align file and function name
* mx_moe_fp4 ready for aiter with clang-format.
* modify the way we represent fp4
* generalize the pipeline scheduling.
* init moe mx f4 scale shuffle
* CMakeLists: disable compiler-bound flags
* mx_fp4 default parameter change
* Moe blockscale gemm1&gemm2 asm support for aiter. Suppress cmake flag until new compiler.
* update code
* tempsave; modify the way we represent fp4
* generalize the pipeline scheduling.
* Add gemm1 gfx942 .co support
* updated code, build passed.
* Update gemm2 asm with latest compiler flag
* Fix mx f4 ckProfiler
* Fix blockwise gemm mx v1
* lds conflict free + buffer load lds
* Add gemm2 v3 64x128x128
* fix a/b scale loading bugs; a/b scales now load correctly
* Add gemm2 v3 64x128x128
* commit with debug info
* fix fp4 profiler
* Add mx fp4 pipeline v1 instances
* Fix v2 topk_weight calc. Add silu asm.
* v2 topk_weight WIP
* init mx fp4 B no preshuffle version
* tempsave. compile pass, function wrong
* enable fp4 moe no weight preshuffle, function pass
* update the TFlops calculation in the example
* Add gemm2 64x128x128 asm. Fix BF16 ref.
* fix 2 typos in fp4_preshuffle
* Better kernel selection in device classes
* correct preShuffleBuffer
we should use packed k to do the shuffle.
* lds conflict free + buffer load lds
* optimize offset math in dma
* Fix fp4 ckProfiler
* Fix MX MFMA tests
* fix f4 pipeline issues
* gemm1 func pass
* update mx moe gemm1_bns tile size to 64x128x256
* update mx moe gemm1 gemm2 TF and BW calculation
* fix typo
* temp save
* Fix example_gemm_mx build
* rename the block pipeline
* correct a typo in tail
* Add rotating to mx examples
* fix the correctness issue
* Fix v1; use M padding
* Add NT flag to B/BScale buffer
* Merge gemm_mx_common.hpp
* temp save, 4.4~4.5
* Fix 'Merge gemm_mx_common.hpp'
* refactor the pipeline
* Pad the M for scale buffer unconditionally
* update MX moe GEMM1 hotloopscheduling
* change the gemm1 tile from 64x128x128 to 128x64x128
* Unconditional Ascale padding
* Pad shuffled a scale only
* pad ascale
* add vmcnt guard for async copy
* Profiler add f4 wp
* Merge preshuffle device
* Add more fp4 wp instances
* Fix do_weight in gemm1. Fix cshuffle_datatype. Clang-format
* Clang-format after 2 merges
* Remove rocm6.3 workaround flags and macro
* Fix fp8 config
* Fix bf8 config
* flag and barrier fix for compiler branch MainOpSelV3
* Add fp8 profiler instances
* Remove debug infos; Enable flags for blockscale f8
* No asm ver. for merging moe blockscale fp8 into mainline
* update the flag name for f8blockscale
* recover example
* fix performance bug of bpreshuffle f8 gemm
* clang format, remove single rate mfma restriction for f8
* remove single rate mfma restriction for f8 blockscale gemm
* Fix moe blockscale gemm1 barrier 0x800 for new compiler
* add pipeline v1 for MOE Gemm2
* Use v1 pipeline for example_moe_gemm2_xdl_mx_fp4_bns
* Fix OOB; add MB96 instances
* remove unnecessary files
* fix the cmake issue
* Enable splitk for mxfp4; clang format;
* Generate random tensor values with multiple threads
* Use packed_size_v for A/BPackedSize
* Fix warning
* Fix target_compile_options for disabled target on gfx942
* fix moe pki4 on gfx950
* doc the kGroup definition
* Fix ThreadwiseTensorSliceTransfer_v4::Run (Fuse scale)
* Refactor thread_copy_lds_direct_load; fix gfx942 direct lds load example; fix f16_pki4 example
* Fix unknown compiler flag
* fix two failed examples.
* fix some failing tile sizes in gfx950 universal gemm. fix test_gemm_fp16
* workaround fix for test_gemm_f32; we have very limited support for LDS direct load if the input matrix is not K-major
* fix test_gemm_splitk;
* Fix compile for mx_mfma_op
* add mfma selection logic for multipled_v3
* Clean up
* Fix device gemm mx link error
* improve the global atomic pattern
* Revert unnecessary copyright updates
* restore minimum_occupancy logic
* Avoid data race in moe gemm2 ref
* Build fp8 gemm_multiply_multiply and moe only on gfx94/95
* update the instance in device_mx_gemm
* Resolve comments
* Copyright 2025
* Remove unused code
* fix library linking issue
---------
Co-authored-by: OscarXu <huaiguxu@amd.com>
Co-authored-by: lalala-sh <Jiaxing.Wen@amd.com>
Co-authored-by: mtgu0705 <mtgu@amd.com>
Co-authored-by: aska-0096 <haocwang@amd.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: valarLip <340077269@qq.com>
Co-authored-by: feifei14119 <feiw@amd.com>
Co-authored-by: Lin, Qun <qlin@amd.com>
Co-authored-by: Andriy Roshchenko <andriy.roshchenko@amd.com>
Co-authored-by: joye <joye@amd.com>
Co-authored-by: asleepzzz <hanwen.chang@amd.com>
2025-06-12 09:25:59 +08:00
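Background for the E8M0 commits above ('Pack e8m0 as int32_t'): E8M0 is the 8-bit, exponent-only block-scale format of the OCP Microscaling spec, and four scales fit in one 32-bit word. A minimal sketch of the decode and packing arithmetic, with illustrative helper names (not CK's API):

```cpp
#include <cstdint>
#include <cmath>

// E8M0 is an 8-bit, exponent-only scale: value = 2^(e - 127), with e == 0xFF
// reserved for NaN (per the OCP Microscaling spec).
float decode_e8m0(uint8_t e)
{
    return std::ldexp(1.0f, static_cast<int>(e) - 127);
}

// Packing four E8M0 scales into one 32-bit word lets a single scalar register
// carry the scales for four MX blocks.
int32_t pack_e8m0_x4(uint8_t s0, uint8_t s1, uint8_t s2, uint8_t s3)
{
    const uint32_t u = uint32_t(s0) | (uint32_t(s1) << 8) |
                       (uint32_t(s2) << 16) | (uint32_t(s3) << 24);
    return static_cast<int32_t>(u); // reinterpret the bits as the int32_t payload
}

uint8_t unpack_e8m0(int32_t packed, int i) // i in [0, 4)
{
    return static_cast<uint8_t>((static_cast<uint32_t>(packed) >> (8 * i)) & 0xFF);
}
```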
Bartłomiej Kocot
8c1ed6f4c1
Move SetZero functions inside the kernels for Grouped Conv ( #2255 )
...
* Disable SetZero before launch kernel for grouped conv fwd
* Move set zero to kernel
* wmma fix
* fix
---------
Co-authored-by: BrianHarrisonAMD <169072757+BrianHarrisonAMD@users.noreply.github.com>
2025-06-11 23:41:03 +02:00
Muhammed Emin Ozturk
6fad1c4874
Stream-K Reduction option as Runtime parameter and Compilation Error Fix (SK-Reduction) ( #2145 )
...
* reduction is passed as runtime parameter
* clang
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_streamk_v3.hpp
Co-authored-by: John Afaganis <john.afaganis@amd.com>
* Update include/ck/tensor_operation/gpu/grid/block_to_ctile_map.hpp
* remove comment
---------
2025-06-11 10:59:44 -07:00
Bartłomiej Kocot
7a83f1d510
Grouped conv bwd wei explicit GEMM for odd C/K ( #2306 )
2025-06-10 11:17:12 +02:00
Bartłomiej Kocot
050cad09b5
Grouped Convolution Backward Weight Explicit GEMM ( #2282 )
...
* Grouped conv bwd weight explicit gemm
* 3d
* cmake fixes
* fix test
* fix
2025-06-06 10:30:08 +02:00
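Background for the "explicit GEMM" naming in the two entries above: a convolution's weight gradient can be computed as a plain GEMM against an explicitly gathered input-patch buffer, rather than folding the tensor transforms into the kernel (implicit GEMM). A 1D reference sketch of the equivalence, with illustrative names:

```cpp
#include <cstddef>

// dW[k][x] = sum_{n,o} dOut[n][k][o] * In[n][o + x]
// With the patches In[n][o + x] gathered into a matrix P[(n,o)][x], this is
// exactly dW = dOut^T * P: a GEMM over an explicit buffer.
void conv1d_bwd_weight_as_gemm(const float* dOut, // [N][K][O] flattened
                               const float* In,   // [N][O + X - 1] flattened
                               float* dW,         // [K][X] flattened
                               std::size_t N, std::size_t K,
                               std::size_t O, std::size_t X)
{
    for(std::size_t k = 0; k < K; ++k)
        for(std::size_t x = 0; x < X; ++x)
        {
            float acc = 0.0f;
            for(std::size_t n = 0; n < N; ++n)
                for(std::size_t o = 0; o < O; ++o)
                    acc += dOut[(n * K + k) * O + o] * In[n * (O + X - 1) + o + x];
            dW[k * X + x] = acc;
        }
}
```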
Andriy Roshchenko
00247e3c29
Optimized GEMMs for MX FP4/8 ( #2294 )
...
Adds V3 GEMM pipeline for MX FP4 and MX FP8
Adds V3 GEMM pipeline for MX FP4 with preshuffling
Adds MXFP4 GEMM tests (#2275)
Adds MXFP4 GEMM examples
Adds MXFP4 GEMMs to ckProfiler
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
Co-authored-by: Andriy Roshchenko <andriy.roshchenko@amd.com>
Co-authored-by: aska-0096 <haocwang@amd.com>
Co-authored-by: lalala-sh <Jiaxing.Wen@amd.com>
Co-authored-by: OscarXu <huaiguxu@amd.com>
Co-authored-by: mtgu0705 <mtgu@amd.com>
Co-authored-by: Ding, Yi <yi.ding@amd.com>
Co-authored-by: feifei14119 <feiw@amd.com>
Co-authored-by: Lin, Qun <qlin@amd.com>
Co-authored-by: joye <joye@amd.com>
Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
2025-06-05 13:54:15 -06:00
Anton Gorenko
52b4860a30
WMMA GEMM universal pipeline v1, mixed precision and paddings, examples ( #2230 )
...
* Fixed cmake errors related to gemm_bilinear. Previously, the cmake build failed when these flags were set: GPU_TARGETS="gfx1100;gfx1201" -D DTYPES="fp16;bf16;fp8"
* Fixed cmake build errors related to test_fp8
* Updates to support mixed precision
* Adding support for RRR, F8xF16xF16 gemm_universal_wmma - wip
* Added support for F8xF16xF16 to gemm_wmma_universal
* Added support for F16xF8xF16 to gemm_wmma_universal
* Added support for BF16xI4xBF16 to gemm_wmma_universal
* Added support for F16xI4xF16 to gemm_wmma_universal
* Fixed IsSupportedArgument to check ComputeTypeA, ComputeTypeB instead of ADataType, BDataType
* Added missing test class for FP16_KM_NK
* Pre-commit hooks fixes
* Added padding instances for f16xf16xf16
* Fixed cmake errors related to gemm_bilinear. Previously, the cmake build failed when these flags were set: GPU_TARGETS="gfx1100;gfx1201" -D DTYPES="fp16;bf16;fp8"
* Fixed cmake build errors related to test_fp8
* Amending changes for adding support for padding instances for f16xf16xf16
* Fixes for padding instances for f16xf16xf16
* Added padding instances for bf16xbf16, f8xf8
* Added packed instances for bf16xi4xbf16
* Added padding instances for f8xf16xf16
* Added padding instances for f16xf8xf16, f16xi4xf16
* Fixed typos for bf16xbf16xbf16 padding instances
* Fixed typos for padded instances
* Added tests for fp16, KM_KN and KM_NK
* Padding is not supported when BDataType is pk_i4_t. Added fix for correct check and removed padding instances.
* Fixed typos
* Updated the set of tests for FP16
* Updated the set of tests for FP16
* Fix typo
* Moved f16xi4 test under the correct data layout group
* example for gemm_universal_bf16
* Adding examples for gemm_wmma instances
* Added the missing parameters
* Fixed review comments and added executable to cmakeLists
* Fixing clang format
* Fixing build errors
* Fixed compilation failure.
* Modified some code as per gemm_universal_examples
* Fixed the gemm specialization error
* Fixed the build errors.
* Fix strides of a/b_thread_desc
The descriptors are larger than needed (even though the compiler doesn't allocate registers for unused values).
* Load in M/NRepeat dims with thread copy's slice instead of a loop
* Clone BlockwiseGemmXdlops_pipeline_v1 for WMMA implementation
* Implement Intrawave and Interwave variants of pipeline v1
* Add instances for Interwave and Intrawave v1
* Add instances with ABlockLdsExtraM and BBlockLdsExtraN = 0
* Remove instances that are too slow (mostly because of register spilling)
* Add a workaround for fp8/bf8->f32 packed conversion issue
* Add instances for Interwave and Intrawave v1
* Enable profiling of mixed precision with f8 and int4 on WMMA
* Fix segfault in profiler when B is pk_i4_t
b_device_buf's size in bytes is larger than b_k_n_permute's, so b_device_buf.ToDevice reads out of bounds.
* Remove instances that are too slow (mostly because of register spilling)
* Add missing add_device_gemm_wmma_universal_f8_f8_bf16 declarations
* Add test case for bf16_i4
* Add missing Regular tests
* Add test_gemm_universal_xdl/wmma_fp16 to REGRESSION_TESTS
They take more than 30 seconds
* Fix a bug that fp16_i4 validation passes only with PermuteB
A permutation required by conversion from pk_i4_t to half_t does not
depend on PermuteB; they can be used independently.
* Use PermuteB with f16_i4 in most instances (as xdl)
Some instances use PermuteB = false for checking correctness.
See also the previous commit.
* Fix cache flushing for pk_i4
* Add mixed precision examples
* Disable all tests and instances with f8 on gfx11
Even though f8_f16 and f16_f8 don't require f8 WMMA instructions,
gfx11 still lacks hardware instructions for fast f8->f32 conversion.
* Add FP16 KM_NK and KM_KN test suites for XDL
These tests were added to common .inc for better testing of WMMA instances
* Fix int8 DTYPES check for gemm_bilinear
---------
Co-authored-by: Anca Hamuraru <anca@streamhpc.com>
Co-authored-by: Apoorva Kalyani <apoorva@streamhpc.com>
2025-06-04 12:22:33 +06:00
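Background for the pk_i4_t fixes above ('Fix segfault in profiler when B is pk_i4_t'): two 4-bit values share one byte, so a packed buffer's logical element count and its byte size differ by a factor of two, and sizing a device copy by element count reads past the allocation. A minimal sketch of that arithmetic, with illustrative types (not CK's pk_i4_t):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Two signed 4-bit values share each byte, so the logical element count is
// twice the byte size. A device copy sized by element count instead of
// size_in_bytes() reads past the end of the packed allocation.
struct PackedInt4Buffer
{
    std::size_t num_elements;  // logical int4 count (kept even here)
    std::vector<uint8_t> data; // packed storage, two elements per byte

    explicit PackedInt4Buffer(std::size_t n) : num_elements(n), data(n / 2) {}

    std::size_t size_in_bytes() const { return num_elements / 2; }

    int8_t get(std::size_t i) const
    {
        const uint8_t byte = data[i / 2];
        const uint8_t nib  = (i % 2 == 0) ? (byte & 0x0F) : (byte >> 4);
        return static_cast<int8_t>(static_cast<int8_t>(nib << 4) >> 4); // sign-extend
    }
};
```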
Bartłomiej Kocot
6e5acee0f9
Add Clamp/Relu bf16/fp16 cast fixes ( #2279 )
...
* Add Clamp/Relu bf16/fp16 fixes
* fix
2025-06-03 18:31:46 +02:00
Bartłomiej Kocot
e7906dd644
Change relu to clamp for grouped conv fwd instances ( #2249 )
2025-05-29 00:51:25 +02:00
Bartłomiej Kocot
037764bbc6
Fix grid size calc for bwd wei ( #2226 )
2025-05-26 16:51:09 +02:00
Thomas Ning
1386924749
Add the instances for small sized GEMM in preshuffle and improve CMake Flag ( #2212 )
...
* Add small instance, add the bug fix, & improve the example CMake
* clang format
2025-05-20 15:05:08 -07:00
jefyang1
f18170064d
Use new mfma instructions for FP8 on gfx950 ( #2202 )
...
* Add logic to use new mfma instructions for fp8 bf8
* Fix example_gemm_xdl_fp8_pk_i4_bpreshuffle_v3 on gfx950 and run clang format
* Update include/ck/tensor_operation/gpu/warp/xdlops_gemm.hpp
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
* Fix intrin_mfma f8 calls due to merge mistake
---------
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
2025-05-19 17:29:51 -07:00
arai713
5b3430b868
Narrowing error fix for codegen compilation ( #2194 )
...
* removed comment with special characters
* fix for arg/template change after merge from develop
---------
Co-authored-by: Thomas Ning <Thomas.Ning@amd.com>
2025-05-16 11:11:54 -07:00
Mateusz Ozga
fa3c6811d8
Disable conv for Filter1x1Stride1Pad0 when K or C is even ( #2186 )
2025-05-16 10:18:47 +02:00
Bartłomiej Kocot
c53b7bd22e
Switch to v2 pipeline for grouped conv bwd data ( #2181 )
...
* Change to old pipeline for grouped conv bwd data
* fix
* fix
* fix
* fix
* fix
* fix
* Fix
2025-05-13 10:14:30 +02:00
Thomas Ning
b49f7de81f
Improve the general performance of the Preshuffled GEMM V3 & delete the unnecessary instances ( #2166 )
...
* make the code compile
* Fixed the example code, but the profiler error remains
* Finished the feature
* Clang format and update the CHANGELOG
* solve the preshuffle v1 & v2 problem
* Comment Addressed
* Comment Addressed
2025-05-12 09:52:58 -07:00
Bartłomiej Kocot
6fddb5708c
Add grouped conv fwd bias relu instances ( #2179 )
...
* Add grouped conv fwd bias relu instances
* fixes
* fix
2025-05-09 22:52:34 +02:00
jefyang1
6b1a339b6f
Fix grouped conv bwd data tests on gfx950 ( #2173 )
2025-05-09 09:01:06 -07:00
Andriy Roshchenko
cb27e7c77f
Ensure MX GEMM Instances can be Cross-Compiled for Multiple Architectures ( #2171 )
...
* Re-enable MX GEMM instances
* Fix compilation error when building MX GEMM for multiple architectures
2025-05-08 13:26:03 -06:00
Andriy Roshchenko
79b0bfeb41
MX GEMM - Add FP8 GEMM Tests for Different Layouts ( #2152 )
...
* Add gemm_mx_fp8_bf8 example with row-major B
* Add more overloads of MX MFMA instructions
* Add MK_KN (RRR) tests
* Add KM_NK (CCR) tests
* Add more problem sizes to Large tests
* Add test_gemm_mx to the list of regression tests
2025-05-01 11:55:48 -06:00
Bartłomiej Kocot
4094ad158a
Integrate universal gemm with conv bwd data and add SplitK ( #1315 )
...
* Integrate universal gemm with conv bwd data
* Fix multi d kernel
* Add splitK support
* instances refactor
* instances refactor
* refactor
* fixes
* fixes
* 16x16 instances
* Fixes
* Fix
* Fix
* Fix
* Fix
* Fix
* Fixes
* fix
* fix
2025-04-28 23:54:49 +02:00
Anton Gorenko
edd92fc546
DeviceGemm_Wmma_CShuffleV3 with BlockGemmPipelineVersion::v3 ( #2096 )
...
* Prepare files for DeviceGemm_Wmma_CShuffleV3
* Implement main part of CShuffleV3 with block pipeline v3 for WMMA
* Remove unused functions and template params for A/B descriptors
* Support both gfx11 and gfx12
* Enable SplitK for gfx12 and disable for gfx11
* Added RowColRow layout for DeviceGemmV2 fp16
* Added more instances for Row, Col, Row data layout
* Added instances for DeviceGemm_Wmma_CShuffleV3, Col, Row, Row data layout
* Added instances for DeviceGemm_Wmma_CShuffleV3, Col, Col, Row data layout
* Added more instances for DeviceGemm_Wmma_CShuffleV3, Row, Row, Row data layout
* Fix formatting
* Add documentation
Based on e5ad48a784
* Enable gemm_universal profiling for gfx11/12
* Add WMMA intrinsics for F8/BF8
* Support F8/BF8 DeviceGemm_Wmma_CShuffleV3, add basic instances
* Add BF16 instances and tests
* Fix test_gemm_universal_wmma_fp8 by adding CK_USE_WMMA_FP8
---------
Co-authored-by: Anca Hamuraru <anca@streamhpc.com>
2025-04-28 10:14:21 +05:00
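Background on the SplitK toggle above: split-K divides the K (reduction) dimension across workgroups, each of which produces a partial C that must then be reduced. A scalar reference of those semantics, assuming split_k divides K evenly; names are illustrative:

```cpp
#include <cstddef>

// Reference semantics of split-K for one output element: K is divided into
// `split_k` chunks, each producing a partial sum; the partials are then
// reduced (on GPU, via atomics or a separate reduction kernel).
float splitk_dot(const float* a, const float* b, std::size_t K, std::size_t split_k)
{
    float c = 0.0f;
    const std::size_t chunk = K / split_k; // assume split_k divides K
    for(std::size_t s = 0; s < split_k; ++s)
    {
        float partial = 0.0f;
        for(std::size_t i = s * chunk; i < (s + 1) * chunk; ++i)
            partial += a[i] * b[i];
        c += partial; // on device this is the cross-workgroup reduction step
    }
    return c;
}
```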
lalala-sh
39ba03f25d
Moe gemm activation ( #2026 )
...
* fix useless code and remove useless oob
* clang format
* fix coredump in e2e test
* fix2
* fix clang format
* fix output oob
* impl int64 but result not correct
* int64 index ok now
* input output all ok
* fix uint32
* revert v1 test
* use uint32
* work to support 130k tokens
* moe sorting fix moebuf
* fix merge
* update moe api fix aiter build
* fix build
* fuse silu
* silu ok
* scale ok
* add silu
* change code
* gemm2 ok
* gufusion compatible ok, fix warnings
* gu fusion for m32 m64 ok
* support bf16 cshuffle
* i4 gemm2 ok
* i4 gemm2 ok and i4 gemm1 build
* 16x16 run ok
* change flops; change cshuffle dtype
* fuse gelu silu act in moe gemm1
* fp8 with act ready
* int4 act ready
* remove useless changes
* remove useless code change
* fix clang format
* add the arch limit of int4 moe gemm
* fuse moe activation
* fix fp8 16x16
* fix no quant case
* fix bugs
* fix fp8 gufusion bug
* remove useless comments
* refine activation code & complete moe example
* fix int8 bugs
* merge tkw1
---------
Co-authored-by: coderfeli <coderfeli@163.com>
Co-authored-by: feli <felix.li@amd.com>
Co-authored-by: illsilin <Illia.Silin@amd.com>
Co-authored-by: root <root@hjbog-srdc-51.amd.com>
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
2025-04-23 10:35:34 +08:00
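Background for the 'fuse silu'/'gufusion' commits above: in gated-MLP experts, GEMM1 emits a gate half and an up half of each output row, and the fused epilogue forms act(gate) * up before GEMM2 ("GU fusion"). A scalar sketch of that math, not CK's kernel code:

```cpp
#include <cmath>

// SiLU(x) = x * sigmoid(x) = x / (1 + e^(-x))
float silu(float x) { return x / (1.0f + std::exp(-x)); }

// One output element of a fused gate-up ("GU") expert MLP: GEMM1 produces the
// gate and up halves of a row; the fused epilogue forms act(gate) * up, which
// feeds GEMM2. (Scalar sketch; names are illustrative.)
float gated_output(float gate, float up) { return silu(gate) * up; }
```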
Muhammed Emin Ozturk
b092c18da7
MI308 fix for streamk 1-Tile floating point exception ( #2101 )
2025-04-21 11:44:07 -07:00
lalala-sh
bcf5bb41be
enable applying top-k weights in moe stage1 gemm ( #2094 )
...
* add switch for mul topk weights
* fix bf16/f16 bugs
* complete
2025-04-18 10:45:49 +08:00
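Background: applying the top-k router weights in the stage-1 GEMM means multiplying each selected expert's partial output by its router weight inside the first GEMM's epilogue rather than in a later pass. A scalar sketch of the combine, with illustrative names:

```cpp
#include <cstddef>

// y = sum_k w[k] * expert_out[k] over the k selected experts; doing the
// multiply in stage 1 folds this scaling into the first GEMM's epilogue.
float combine_topk(const float* expert_out, const float* w, std::size_t top_k)
{
    float y = 0.0f;
    for(std::size_t k = 0; k < top_k; ++k)
        y += w[k] * expert_out[k];
    return y;
}
```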
Andriy Roshchenko
213b203a3c
MX GEMM - Parameterized Test Template ( #2088 )
...
* Tests for MX FP8 GEMM
* Improve documentation
2025-04-16 19:56:00 -06:00
Andriy Roshchenko
da54464cce
MX GEMM - Add MX BF8 example ( #2071 )
...
* Add MX GEMM example for MX BF8
* Verified MX FP8 with 16x16x128 scale builtin
* Verify MX BF8 GEMM with BF16 output
2025-04-16 15:25:02 -06:00
Andriy Roshchenko
7106976a72
MX GEMM - New GEMM pipeline for MX data types ( #2059 )
...
* Allow selection of mfma_scale instructions
* Read B tensor from LDS to VGPR in chunks of 16 in MFMA order
* Add constexpr and synchronize return type for `get_exponent_value`
* Pass scales by reference and add comments to `mfma_scale_f32_32x32x64`
* Add support for microscaling instructions in `XdlopsGemm`
* Fix `mfma_scale_f32_16x16x128f8f6f4` wrapper
* Remove software implementation of MX GEMM
* Make interface of `intrin_mfma_scale_f32_16x16x128f8f6f4<16, 16>` consistent with the other scale instruction
* Update README
* Updated CHANGELOG
* Remove unused static methods
2025-04-15 17:17:07 -06:00
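Background for the mfma_scale commits above: an MX GEMM groups elements into blocks of 32, attaches one E8M0 scale per block of A and of B, and applies both scales to each per-block partial sum. A scalar reference of those semantics; names are illustrative, not the intrinsic's real interface:

```cpp
#include <cstdint>
#include <cstddef>
#include <cmath>

// Reference semantics of a block-scaled (MX) dot product: elements come in
// blocks of 32, each block of A and of B carries one E8M0 scale, and both
// scales multiply that block's partial sum. Assumes k is a multiple of 32.
float mx_dot(const float* a, const uint8_t* a_scale,
             const float* b, const uint8_t* b_scale, std::size_t k)
{
    constexpr std::size_t kBlock = 32; // MX block size per the OCP spec
    float acc = 0.0f;
    for(std::size_t blk = 0; blk < k / kBlock; ++blk)
    {
        float partial = 0.0f;
        for(std::size_t i = 0; i < kBlock; ++i)
            partial += a[blk * kBlock + i] * b[blk * kBlock + i];
        const float sa = std::ldexp(1.0f, int(a_scale[blk]) - 127); // decode E8M0
        const float sb = std::ldexp(1.0f, int(b_scale[blk]) - 127);
        acc += sa * sb * partial;
    }
    return acc;
}
```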
Mingtao Gu
56378f810f
CK pk_i4_t test failures fix (SWDEV-518629) ( #2075 )
...
* fix pk_i4_v3 test failures in Ubuntu env.
* fix pk_i4_t test failures on Ubuntu.
* some fixes.
---------
Co-authored-by: mtgu0705 <mtgu@amd.com>
2025-04-14 16:58:57 +08:00
Illia Silin
0d4f145078
Fix build issues for multiple targets. ( #2077 )
...
* build for multiple targets on gfx942
* add missing ignore statements
2025-04-11 12:12:53 -07:00
valarLip
2c563fecf7
add passthrough for int32->float32 ( #2062 )
2025-04-08 15:16:30 -07:00
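Background: CK's element-wise ops are small functors applied in the epilogue; a pass-through with conversion only casts the source type to the destination type. A simplified sketch in that style (not the actual CK definition):

```cpp
#include <cstdint>

// Element-wise pass-through-with-conversion functor, in the style of CK's
// element_wise operators (simplified; illustrative, not CK's definition).
struct PassThroughI32ToF32
{
    void operator()(float& y, const int32_t& x) const { y = static_cast<float>(x); }
};
```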
Illia Silin
572cd820ce
Split env.hpp header from the ck.hpp header. ( #2049 )
...
* split env.hpp out of main headers
* fix namespace logic
2025-04-03 15:30:21 -07:00
aledudek
9329432f6c
Post-merge changes for fully async args copy in ck grouped gemm ( #1991 )
...
* Post-merge changes for fully async args copy in ck grouped gemm
* Post-merge documentation and naming changes
* Build fix and updated changelog
* Revised comments
2025-04-03 13:35:43 +02:00
Bartłomiej Kocot
2ccf914888
Add support for GKCYX grouped conv weight ( #2023 )
...
* Grouped conv bwd weight GKCYX support
* fix and changelog
* fix
* fix
* fixes
* comments
* fix
2025-04-02 23:59:49 +02:00
Adam Osewski
e5ad48a784
Basic docs for universal gemm & ck-tile gemm. ( #2014 )
...
* Basic docs for universal gemm & ck-tile gemm.
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Update include/ck/tensor_operation/gpu/device/impl/device_gemm_xdl_cshuffle_v3.hpp
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
* Reviewers' suggestions.
* Align tparam names in doc with class tparams.
* More reviewers fine tuning ;)
---------
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
2025-04-02 11:03:40 +02:00
Bartłomiej Kocot
8c0ab61ece
Grouped conv backward data GKCYX support ( #2029 )
...
* Grouped conv backward data GKCYX support
* profiler
* Converter
* split instances
2025-04-01 13:24:38 -07:00
Bartłomiej Kocot
ec742908bd
Grouped conv fwd v3 fix for SplitN and G > 1 ( #2038 )
...
* Grouped conv fwd v3 fix for SplitN and G > 1
* Remove int8 large test
* Restore int8 test
2025-04-01 13:19:35 -07:00
Bartłomiej Kocot
54c81a1fcf
Add support for GKCYX grouped conv fwd ( #2015 )
...
* Add support for GKCYX grouped conv fwd
* fixes
* fix
* changelog
* Fixes
2025-03-26 21:13:38 +01:00
Andriy Roshchenko
6660dc6b8e
Introduce MX GEMM for FP8 data type ( #2000 )
2025-03-24 15:41:07 -06:00
Bartłomiej Kocot
5b0873c31a
Fix split N for large images in grouped conv fwd ( #2004 )
...
* Fix split N for large images in grouped conv fwd
* Fix comments
2025-03-22 23:19:49 +01:00
felix
7eaedeb36c
Ck moe hot fix ( #1979 )
...
* fix useless code and remove useless oob
* clang format
* fix coredump in e2e test
* fix2
* fix clang format
* fix output oob
* clang format
* rm useless comments
---------
Co-authored-by: coderfeli <coderfeli@163.com>
Co-authored-by: illsilin <Illia.Silin@amd.com>
2025-03-19 22:58:27 +08:00
aledudek
5095906975
Async grouped gemm v3 ( #1940 )
...
* Fully async grouped gemm
* Remove commented code
* Remove maybe_unused
* host kernel args
* Checkpoint segfault debugging...
* Working part1
* Working part2
* Remove comments...
* Use void ptr for gemm kernel host args
* Fix device_grouped_gemm_multiple_d_dl build issue
* Fix device_grouped_gemm_xdl build issue
2025-03-17 16:42:43 +01:00
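Background for the "fully async" naming above: per-group kernel arguments are copied host-to-device with hipMemcpyAsync on the same stream as the launch, so stream ordering replaces host-side synchronization. A minimal sketch of the pattern, with illustrative names (not CK's actual API):

```cpp
#include <hip/hip_runtime.h>
#include <cstddef>

// Per-group GEMM arguments; illustrative layout, not CK's argument struct.
struct GroupArgs
{
    const void* a;
    const void* b;
    void* c;
    int m, n, k;
};

// The per-group arguments are staged to the device on the launch stream, so
// the H2D copy is ordered before the kernel without any host-side sync.
void stage_args_async(const GroupArgs* host_args, GroupArgs* dev_args,
                      std::size_t group_count, hipStream_t stream)
{
    hipMemcpyAsync(dev_args, host_args, group_count * sizeof(GroupArgs),
                   hipMemcpyHostToDevice, stream);
    // the grouped GEMM kernel reading dev_args would be launched on `stream` here
}
```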
Bartłomiej Kocot
c2e4898b4b
Grouped conv bwd data NGCHW ( #1967 )
...
* Grouped conv bwd data NGCHW
* fixes
* fix
* Improvements
* Fix
* Fix
* add client example
2025-03-17 13:32:00 +01:00
feli
251afab3b7
ck_moe: fix useless code and remove useless oob ( #1972 )
...
* fix useless code and remove useless oob
* clang format
---------
Co-authored-by: coderfeli <coderfeli@163.com>
2025-03-12 09:22:42 -07:00
Illia Silin
4c97cc511e
use old intrinsics with staging compiler ( #1970 )
2025-03-12 07:29:09 -07:00