Changed the include order of the new WMMA/MFMA unification
framework (#5241)
These changes fix the include order and make the header files
independent of one another. The `remod.py` script has also been run,
updating the `grouped_convolution.hpp` and `core.hpp` files.
## Motivation
Some headers appear to depend on include order.
For example, when moving `#include "wmma/wmma.hpp"` in
[amdgcn_mma.hpp](https://github.com/ROCm/rocm-libraries/blob/develop/projects/composablekernel/include/ck_tile/core/arch/mma/amdgcn_mma.hpp)
later in the include list, compilation errors occur. The pre-commit
script `remod.py` also shuffles includes into alphabetical order, which
triggers the same compilation issues.
Expected behaviour:
Headers should be independent of one another: no header should require
another to be included first. Each header should compile correctly on
its own.
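As an illustration only (hypothetical file and names, not the actual framework
headers), a self-contained header includes everything it uses rather than
relying on what an earlier include in the translation unit happens to pull in:

```cpp
// example_traits.hpp -- hypothetical header, for illustration only.
// It includes every dependency it uses, so it compiles on its own and
// survives alphabetical reordering of includes by remod.py.
#pragma once

#include <cstdint> // fixed-width integers used below

namespace ck_tile_example {

// Relies only on names brought in by this header's own includes,
// never on headers that its includer happens to include first.
struct example_traits
{
    static constexpr std::int32_t lanes_per_wave = 32;
};

} // namespace ck_tile_example
```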
## Test Plan
The CI (which runs `remod.py`) should build successfully.
## Test Result
Existing CI should compile and be green.
## Submission Checklist
- [x] Look over the contributing guidelines at
https://github.com/ROCm/ROCm/blob/develop/CONTRIBUTING.md#pull-requests.
[CK TILE] Refactor sequence_reverse_inclusive_scan
## Proposed changes
Refactor the ck_tile `sequence_reverse_inclusive_scan` implementation from
recursion to a for-loop.
Tracking issue: #4229
This pull request introduces a new lightweight array type,
`static_array`, and refactors the sequence utilities to use it for
improved constexpr support and simplicity. The changes also include
updates to the build system to add container-related tests.
**Core Library Improvements:**
* Added a new header `static_array.hpp` that defines the `static_array`
type, a constexpr-friendly array with basic accessors and no custom
constructors.
* Updated includes in `core.hpp` and `sequence.hpp` to import
`static_array`.
[[1]](diffhunk://#diff-14b406eccf59794051a16c0c9c1a7e11234324bfdd107a5bbe0f173cd25bcddcR44)
[[2]](diffhunk://#diff-5042e5b47bb2ba78bbab2d284338cf0503bc8fb76a7d631cc2684ad6ca832a76R7)
**Refactoring to Use `static_array`:**
* Refactored sequence utilities in `sequence.hpp` to use `static_array`
instead of the previously forward-declared `array` type, including in
histogram and array generation logic.
[[1]](diffhunk://#diff-5042e5b47bb2ba78bbab2d284338cf0503bc8fb76a7d631cc2684ad6ca832a76L1108-R1133)
[[2]](diffhunk://#diff-5042e5b47bb2ba78bbab2d284338cf0503bc8fb76a7d631cc2684ad6ca832a76L1130-R1146)
* Rewrote the implementation of `sequence_reverse_inclusive_scan` to use
`static_array` for intermediate storage, improving constexpr evaluation
and clarity.
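For illustration, a minimal sketch of the approach with assumed names (the real
`static_array` and scan utilities live in ck_tile and differ in detail): a
constexpr reverse inclusive scan written as a plain for-loop over a fixed-size
array instead of template recursion.

```cpp
#include <cstddef>

// Hypothetical stand-in for ck_tile's static_array: a plain aggregate
// with constexpr-friendly element access and no custom constructors.
template <typename T, std::size_t N>
struct static_array_sketch
{
    T data[N];
    constexpr T& operator[](std::size_t i) { return data[i]; }
    constexpr const T& operator[](std::size_t i) const { return data[i]; }
};

// Reverse inclusive scan as a single constexpr loop:
// out[i] = in[i] op in[i+1] op ... op in[N-1].
template <typename T, std::size_t N, typename BinaryOp>
constexpr static_array_sketch<T, N>
reverse_inclusive_scan_sketch(const static_array_sketch<T, N>& in, BinaryOp op)
{
    static_array_sketch<T, N> out{};
    out[N - 1] = in[N - 1];
    for(std::size_t i = N - 1; i-- > 0;)
        out[i] = op(in[i], out[i + 1]);
    return out;
}

// Example: the reverse inclusive sum of {1, 2, 3, 4} is {10, 9, 7, 4}.
static_assert(reverse_inclusive_scan_sketch(
                  static_array_sketch<int, 4>{{1, 2, 3, 4}},
                  [](int a, int b) { return a + b; })[0] == 10);
```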
**Build System and Testing:**
* Added a new test subdirectory for container tests and a GoogleTest
executable for `unit_sequence.cpp` to the CMake build configuration.
[[1]](diffhunk://#diff-5d35ff7555d3f0b438d45cde06b661eb1332cdbec66287ac7ec3c478d688aae5R5)
[[2]](diffhunk://#diff-1f54f0d2b431b7fc74f7b4ffb66e80c381c904c3383b1d27987467e3482d6d7aR1-R7)
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* chore: split block scale example instances into more separate files to speed up compile times
* wip: fp4 scaffolding for abquant
* feat: add fp4 decoding-while-loading to abquant pipeline
* feat: add support for fp4 CPU verification in abquant
* chore: add time tracking to reference calculation
* feat: add a4w4 test for blockscale gemm
* feat: optimize reference calculation by preconverting values to AccType
* feat: add fp4 to fp8 look-up table
* fix: reference to wrong ComputeDataType field in QuantProblem
* feat: type utilities for determining MFMA compute types
* feat: packed fp4 for abquant weight preshuffle
* feat: add separate tests for a4w4 base case, padding and preshuffleB
* fix: fp4 conversion on gfx950 attempting to use an unsupported method
* fix: test case was using quant group sizes which don't work on gfx950 due to larger mfma tile size
* chore: add fp4 preshuffleb mode to block scale example
* chore: sanity check for packed types being 1 byte
* chore: clarify tensor dimension indices with constants
* chore: replace traits check with specialized check for packed types
* style: some minor refactoring and cleanup
* fix: correct conversion table for FNUZ fp8
* chore: add fp4 instances to main abquant instances again
* chore: use same initialization branch for int4 and fp4
* chore: add missing initialization for fp4 in block scale gemm example
---------
Co-authored-by: Thomas Ning <Thomas.Ning@amd.com>
Add signal-based synchronization for persistent GEMM kernels where
input data becomes available incrementally. Uses modulo wraparound
(like PyTorch's AsyncMM) for chunk index calculation:
chunk_idx = ((tile_idx + tile_idx_pivot) / tiles_per_chunk) % num_chunks
Key components:
- PersistentAsyncInputScheduler struct with tiles_per_chunk_m,
chunk_signals, tile_idx_pivot_m, and num_chunks fields
- wait_eq_wave method using __builtin_amdgcn_s_sleep for power efficiency
- IsSupportedArgument validation for scheduler parameters
- Example demonstrating async input scheduling with simulated producer
- GTest unit tests covering all layout combinations
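A hedged, host-side sketch of the chunk-index arithmetic above (the field names
follow the description; the struct and helper are illustrative, not the actual
kernel code):

```cpp
#include <cstdint>

// Illustrative model of the scheduler fields described above.
struct PersistentAsyncInputSchedulerSketch
{
    std::int32_t tiles_per_chunk_m; // tiles covered by one input chunk
    std::int32_t tile_idx_pivot_m;  // offset so waves start on different chunks
    std::int32_t num_chunks;        // total chunks; index wraps modulo this
};

// chunk_idx = ((tile_idx + tile_idx_pivot) / tiles_per_chunk) % num_chunks
constexpr std::int32_t chunk_index(const PersistentAsyncInputSchedulerSketch& s,
                                   std::int32_t tile_idx)
{
    return ((tile_idx + s.tile_idx_pivot_m) / s.tiles_per_chunk_m) % s.num_chunks;
}

// Example: with 4 tiles per chunk, 8 chunks, and pivot 0,
// tiles 0..3 map to chunk 0, tiles 4..7 to chunk 1, and tile 32 wraps to chunk 0.
static_assert(chunk_index({4, 0, 8}, 5) == 1);
static_assert(chunk_index({4, 0, 8}, 32) == 0);
```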
* First look at mfma / wmma unification
* Refactor
* Re-org file structure
* Restructure transform selection and WaveWiseMma class
* Update license files. Add missing gfx1151 support. Change wave size for HOST to 1. Update datatypes naming consistency
* Fixes default MmaSelector implementation
* Adds unit tests for amdgcn_mma and arch
* Consolidate common arch id checks to constexpr functions. Strongly type ids as amdgcn_target_arch_id object.
* Refactor is_any_value_of
* Fixes mma_selector logic
* Fix typo
* Add mma selector test for tile decomposition
* Fix compilation of mma.hpp
* Revert back to c++17 compatibility
* Fix compiler error by returning index_t from get_warp_size()
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fixes compiler error for missing is_wave32() function
* Fixes compiler error: host wave_size() should be 64
* Fixes compiler errors where __cpp_concepts is not defined
* Fixes compiler errors where __cpp_concepts is not defined
* Fix test failure: host is wave64 by default
---------
Co-authored-by: Chris Millette <you@example.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Add indexing support to pooling operator
- Add IndexDataType template parameter to pooling problem and kernel
definitions
- Enable pooling kernel to output indices of selected elements during
max/absmax pooling
- Add overloaded operators for Max and AbsMax that track when values
change using bool changed parameter
- Support optional index buffer allocation and management in device
memory
- Modify BlockReduce2d classes to handle index tensors alongside value
tensors
- Add separate shared memory allocation for index data in cross-warp
reductions
- Create validate_pool_indices function to verify index correctness
- Modify pool3d.cpp example to demonstrate index output functionality
- Add tests for index output
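A minimal sketch of the value-tracking idea with hypothetical names (the actual
CK Tile Max/AbsMax operators differ): the overload taking a `bool changed`
flag lets the caller update the index whenever the running value is replaced.

```cpp
// Hypothetical max reduce-op, for illustration only.
struct MaxSketch
{
    template <typename T>
    void operator()(T& y, const T& x) const
    {
        if(x > y) y = x; // value-only reduction
    }

    template <typename T>
    void operator()(T& y, const T& x, bool& changed) const
    {
        changed = (x > y); // report whether y was replaced
        if(changed) y = x;
    }
};

// Illustrative caller: keeps the winning index next to the winning value.
template <typename T, typename Index>
void accumulate_with_index(MaxSketch op, T& y, Index& y_idx, const T& x, Index x_idx)
{
    bool changed = false;
    op(y, x, changed);
    if(changed) y_idx = x_idx;
}
```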
* fixes
* Refactor BlockReduce2D functions to get rid of auxiliary private types.
* comment resolutions and some changes to block_reduce2d
- index reference implementation improved
- reduce_operator.hpp cleaned up
- updated the block_reduce2d.hpp to have index calculation for
BlockReduce2dLinearCrossWarpSync as well
* conditionally used variable declaration improvement
- the conditionally used variables are needed only when indexing is
enabled. They are now marked as potentially unused for the compiler and
declared with the smallest possible size, which may allow better
optimization than the previous declarations
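A hedged sketch of that idea with illustrative names (not the actual kernel
code): shrink the index storage to a trivial type and mark it as possibly
unused when indexing is disabled.

```cpp
#include <cstdint>
#include <type_traits>

template <bool kOutputIndex, typename IndexType = std::int32_t>
struct reduce_state_sketch
{
    // When indexing is disabled, store a 1-byte placeholder instead of a
    // full index element, and tell the compiler it may go unused.
    using index_storage_t =
        std::conditional_t<kOutputIndex, IndexType, std::uint8_t>;

    float value{};
    [[maybe_unused]] index_storage_t index{};
};

static_assert(sizeof(reduce_state_sketch<false>) <= sizeof(reduce_state_sketch<true>));
```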
* comment resolutions
* lexical ordering of the indices
- introduced accumulate methods that handle the intermediate steps
needed to order the indices
* add reduce_operator_accumulate.hpp to core.hpp
---------
Co-authored-by: Adam Osewski <Adam.Osewski@amd.com>
* Add more printing to core cktile
* Revert other changes in static encoding pattern
* Refactor to using a free print() function
* Remove loops and print just the containers
* Print tuple with better formatting, fix sequence compilation
* Add some tests for print utility
* Add print utility header
* Print for static_encoding_pattern
* add buffer_view printing
* Align vector_traits
* Fix formatting
* Lower-case enum strings
Co-authored-by: Christopher Millette <63608002+cgmillette@users.noreply.github.com>
* Remove empty comment lines
* Fix test with lower-case too
* Reduce repeated code in print tests, move helper function closer to type definition, test X&Y
* Add test_print_common.hpp
* add print.hpp in core.hpp
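As a hedged illustration of the free-function style (std containers and
std::ostream stand in here for the CK Tile containers and printing mechanism
that the commits above actually target):

```cpp
#include <cstddef>
#include <iostream>
#include <tuple>
#include <vector>

// Illustrative free print() overloads; the real print.hpp handles CK Tile
// containers (tuple, sequence, buffer_view, ...) rather than std types.
template <typename T>
void print(const std::vector<T>& v)
{
    std::cout << "{";
    for(std::size_t i = 0; i < v.size(); ++i)
        std::cout << (i ? ", " : "") << v[i];
    std::cout << "}";
}

template <typename... Ts>
void print(const std::tuple<Ts...>& t)
{
    std::cout << "(";
    std::apply(
        [](const auto&... xs) {
            std::size_t i = 0;
            ((std::cout << (i++ ? ", " : "") << xs), ...);
        },
        t);
    std::cout << ")";
}
```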
---------
Co-authored-by: Aviral Goel <aviral.goel@amd.com>
Co-authored-by: Christopher Millette <63608002+cgmillette@users.noreply.github.com>
Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
* Use read_tr in universal gemm
* Enable all instances back
* Revert example37 changes
* Resolve comments
* resolve comments 2
* Fix assertion msg
* fix the gemm basic
* change index_t to bool for preshuffle variable
* Solve the comment
---------
Co-authored-by: Thomas Ning <Thomas.Ning@amd.com>
Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
Co-authored-by: Max Podkorytov <4273004+tenpercent@users.noreply.github.com>
Co-authored-by: AviralGoelAMD <aviral.goel@amd.com>
* [draft] Add pk_fp4 and test
* Add hw conversion for fp4
* Refine test code and pk_fp4 constructor.
* fix test indent
* modify according to comment.
* fix clang-format
* modify according comments.
---------
Co-authored-by: asleepzzz <hanwen.chang@amd.com>
* add transpose load; no real logic
* fix some compile errors
* fix some issues
* update transpose load logic
* add some fixes
* fix a distribution issue
* update some codes
* add some fix
* can pass; but no logic
* transpose load enable
* update tile transpose
* miss output tile distribution mapping
* hack for transpose 16x16
* update output tensor distribution
* delete unused variables
* fix transpose related codes
* update transpose load example
* exchange the iteration order
* fix 16x16 related dimension transpose
* fix a transpose index issue
* fix a transpose index issue
* fix clang format check
* update load tile transpose related codes
* fix compile errors and pass 16x16 tests
* fix a typo
* update logic
* check other data types
* add transpose load api
* update transpose load api
* fix clang format check
* change file name
* refactor codes
* update code name
* delete some unused codes
* delete the unused oob flag for transpose load
* update tensor view api for transpose load
* update for testing
* fix a typo error
* move transpose ops to example directory
* update transpose api
* update include file
* fix for pr review
* fix compile errors
* change directory name
* delete the duplicated directory
* update cmakelists file
* delete the unused codes
* update function names
* update transpose policy
* update code after remod.py
* update codes
* add some comment
* Polish the instr infrastructure
* build up the fixed instr
* redesign the transpose api, currently it has numerical error
* add the bf16 transpose
* fix some issues
* add some comments
* update document
* Finished the refactor of the API and passed verification
* fix the merging issue
---------
Co-authored-by: ThomasNing <thomas.ning@amd.com>
* window_origin variable now in base class
* abstracted more functions
* consolidated tile_window_static_distribution and tile_window_static_lengths
* clang format
* skeleton code for tile_window and tile_window_linear consolidation
* more abstraction
* moved variables from child to parent
* clang format
* removed comments
* removed debug code
* removed debug code
* abstracting traits WIP
* consolidated traits
* removed comments and clang formatted
* replace buffer load/store intrinsics with builtins
* fix clang format
* replace buffer load/store intrinsics with built-ins in ck_tile
* fix clang format
* add switch between buffer intrinsics and built-ins
* change the builtins threshold to clang20
* fix clang format
* fix some compilation errors
* revert changes in ck_tile
* revert changes in ck_tile
* delete all root files and folders when CI completes
* try changing the username in CI
* fix groovy syntax
* add user and group id info to ci dockers
* change ownership of all files in CI to jenkins at the end
* update changelog
* [CK_TILE] Add GetName functions for Gemm Kernels
* [CK_TILE] Add GetName for grouped gemm
* [CK_TILE] Add GetName for gemm - review changes
* [CK_TILE] Print also gemm problem pipeline and shape
* [CK_TILE] Print also GemmPipelineScheduler
* [CK_TILE] GetName - fixed Scheduler operator<< visibility
* [CK_TILE] GetName info adjustments
* [CK_TILE] GetName post-merge fix
* [CK_TILE] GetName - add general concat function
* [CK_TILE] GetName - small adjustments, format change
* post merge develop fix
* Remove commented code
* Merge from internal (#1857)
* enable batched_gemm_softmax_gemm_perm_wmma for gfx12
* disable instances with blocksize=256 in attention examples
* debugging
* debug
* fixed lds_enabled
* debugging
* Fix and add limit to skiplds feature
* Enable skipLds feature and fix compilation bugs
* add ck_tile definitions for gfx12
* fix clang format and test/wmma_op
* update instances cmake for gfx12
* disable the test_wmma_op on gfx12
* fix the builds for gfx950
* add gfx12 and gfx950 to default target list
* clean-up cmake file
* Initial introduction of OFP8 data types.
* Renamed FP8 and BF8 tests into FP8_FNUZ and BF8_FNUZ.
* Implementation of ConvertFP32Nearest in test_fp8_ocp.
* Remove dependence on possibly undeclared alias.
* Implement FP8OCP test for stochastic rounding mode.
* Implement FP8OCP tests for half_t type conversions.
* enable bf16 atomic add on gfx950
* Implement ConvertFP32Nearest test.
* Implement ConvertFP32Stochastic test.
* Implement ConvertFP16Nearest and ConvertFP16Stochastic tests.
* Refactoring. Move FP8 definitions into a separate header file.
* Enable easy switching between architectures.
* Fix compilation error for gfx942 architecture.
* Add fp4 type with constants
* only build gfx950 branch for gfx950 target by default
* Enable OCP build of example_gemm_xdl_fp8.
* Fix formatting.
* fix the build logic for gfx950
* Improve GEMM example verbosity.
* Add constexpr where applicable.
* fix the logic of enabling XDL and WMMA instances
* Improve GEMM example verbosity.
* Enable build of example_gemm_xdl_fp8_bf8 test.
* Fix tests for gfx1101 architecture.
* Build DPP examples only on gfx103 and gfx11 architectures.
* Optionally run either CPU or GPU verification with GEMM examples.
* Extend GeneratorTensor_Sequential to produce values of prescribed data types.
* Add missing constructor.
* Add scale type and mxfp conversions
* Update conversions
* Add conversion tests
* Fix typo
* Improve infrastructure for OFP8 data type support.
* BUGFIX. Should not use FP8 as Compute/Accum data type.
* Add custom target for grouped_convnd_bwd_weight tests.
* Can build `tests` target on gfx950.
* Bugfixes on gfx1101 architecture.
* Fix dependencies.
* Add stochastic rounding tests
* Provide single point of truth for FP8 INF and NAN checks
* Prevent instantiation of operators that are not supported by FP8 data types
* Add FP8 type selection into client_example CMakeLists.txt
* Prevent sccache server from shutting down during build
* Fix test success reporting logic
* Change default verification method to CPU.
GPU verification takes too much time to complete on the emulator.
* Add scale <-> float conversions
* Add scaled conversions with tests
* Add device conversions
* Make sure all tests and examples are built for gfx950
* Facilitate testing of FP8 data types on the emulator
* Introduce two new tensor generators
* Enable instances built for gfx94 to be built on gfx950
* Verify 35_splitk_gemm on floating point numbers.
splitk gemm appears to lose precision vs. the reference implementation when FP numbers are involved.
* Format
* Verify 04_gemm_add_add_fastgelu on floating point numbers
* Verify 20_grouped_conv_bwd_weight on floating point numbers
* Verify 38_grouped_conv_bwd_data_multiple_d on floating point numbers
* Verify more tests on floating point data
* Fix data types and improve testing verbosity.
* Add fp4 vectors
* Add debug tests
* Upgrade to NPI 573 build docker.
* Skip on gemm_universal tests.
The tests take too long to complete on the emulator.
Need to see if it is possible to reduce the scope of the testing to just FP8 data types.
* Add new mfma instructions and examples
* Add preprocessor directives for gfx950 specific code
* Fix gfx1101 build
* Document test availability
* Re-enable fp8 gemms for gfx94/95
* Cherry-pick GEMM Universal tests for FP8 data types
* Cleanup
* Add vector types and tests
* Add check_err function
* Add tensor generators
* CK_USE_GFX94 has already been set on this branch
* Fix
* Address formatting issues and leftovers
* Make fail/pass logic consistent within 01_gemm folder
Removed multiple negations in fail/pass logic to propagate `true` as the success indicator.
* Fix GPU verification reporting logic.
* Update year in copyright notice.
* Cleanup
* Use `enum class` instead of `enum`
* Remove set_property for FP8 tests
* Add vector conversions
* Fix
* Fix linker error
* Clean up
* Fix gfx950 conversions
* Clean up
* Fix more gfx950 conversions
* Fix even more gfx950 conversions
* Narrowing the scope of PR to OCP FP8 enablement only
* Add tests for OCP FP8 vector_type storage
* Fix client examples build
* Fix typo
* Update e8m0 casting
* Rename E8M0 type
* Update unpack method
* Cleanup merge artifacts
* Enable gemm kernel on all gfx9 architectures (#227)
* clean-up
* Implement `non_native_vector_base` with `ext_vector_type` array. (#232)
* Enable support of 1, 2, 4, and 8-byte custom types in CK.
* Fix pool tests for OCP FP8 data type
* Fix build
* Add ckProfiler gemm instances for new mfma instructions and fix ckProfiler build on MI350
* fix clang format
* Add new mfma instructions and examples
* Add preprocessor directives for gfx950 specific code
* Add ckProfiler gemm instances for new mfma instructions and fix ckProfiler build on MI350
* fix clang format
* Fix clang format for the newly merged files
* Use the existing example instances for fp16 bf16 and int8
* Remove comment on new mfma instructions in MfmaInstr
* Update include/ck/tensor_operation/gpu/grid/gridwise_batched_gemm_gemm_xdl_cshuffle_v1.hpp
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
* merge from public repo
* Fix ck build
* Fix ck build
* Use double for max_abs_in_val
* Move scaled_type_convert functions to a separate header (#251)
* re-enable building mha lib and gemm_universal_f8 instances for gfx950
* Update library/src/tensor_operation_instance/gpu/CMakeLists.txt
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
* fix typo for CK_USE_OCP_FP8
* fix typo for CK_USE_OCP_FP8
* Add FP6 and BF6 types (#261)
* Add a rounding flag
* Add FP6 and BF6
* Add tests
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
* Clean up
---------
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
* fix one more typo
* Refactor E8M0 scale implementation (#262)
* Refactor E8M0 scale implementation
* Add MXFP6 and MXBF6 conversion methods (#270)
* Add conversions
* Add tests
* Add docstrings
* Add scaled conversions
* Add fp6/bf6 tests
* Remove misleading fp4 test case
* Add docstrings
* Clean up
* Address comments
* Set stricter tolerances for RNE tests
* Add missing tests
* Add native conversions to float
* Revert "Add native conversions to float"
This reverts commit 09467111f73b753c8cc3d597533b187940353dab.
* Update copyright years
* replace the fp6 with bf6 convert calls in test_bf6
* fix test_bf6
* enable smfmac test
* [MX FP8] Add Scaled Type Convert Functions for OCP FP8/BF8 data types (#271)
* Move scaled_type_convert functions to a separate header
* Introduce MX data tests
* Build MX tests only on relevant architectures
* Refactor E8M0 scale implementation
* Fix `config.h` typo
* Cleanup deprecated symbols
* Refactor `amd_ck_fp8.hpp`
* `scaled_type_convert` for `f8_ocp_t`
* Implement test for MX FP8 scaled type convert
* Implement test for MX BF8 scaled type convert
* Scaled type convert for vectors of 2 FP8 elements
* Scaled type convert for vectors of 16 FP8 elements
* Implementation of scaled conversion from F32 to F8
* Add tests for scaled conversions from FP32 to FP8
* Add documentation to the test functions
* Implementation of scaled conversion from F32x2 to F8x2
* Implementation of scaled conversion from F32x16 to F8x16
* Implementation of scaled conversion from F32x32 to F8x32
* Implementation of scaled conversion from F8x32 to F32x32
* Verified on the emulator
* MX FP GEMM - Example Template (#277)
Temporarily uses `DeviceGemmMultiD_ABScale_Xdl_CShuffle_V3` kernel and 128x128 scaling matrices.
Must be modified to use MX-native GEMM kernel with 16 or 32 component vectors per scale.
Verified on the emulator.
* Add vector support
* Add tests
* Add missing type aliases
* Fix test naming
* only build mx example for gfx950
* disable CK_USE_AMD_MFMA_GFX950 by default
* fix build for multiple archs
* fix typo
* fix typo
* Update unpack signature
* Fix merge
* Add size checks in pack function
* Add a flag
* Add conversions
* Fix build logic
* Update pack/unpack methods
* Remove unneeded AsType accessors
* Add docstrings
* Add a flag to config file
* Test the functionality of V_MFMA_F32_16X16X128_F8F6F4 and V_MFMA_F32_32X32X64_F8F6F4 instructions. (#293)
* Introduced MFMA tests
* Verified f8f6f4 MFMA Instructions
* Move flag logic to scaled_type_convert header
* Use pointers instead of array indices
* Fix a typo
* Update tests and pack functions
* Fix gemm gemm on gfx950
* Fix clang format
* restore the default gpu target lists
* fix the jenkinsfile
* add missing ifdef
---------
Co-authored-by: Jing Zhang <jizhan@amd.com>
Co-authored-by: aska-0096 <haocwang@amd.com>
Co-authored-by: Jun Liu <Liu.Jun@amd.com>
Co-authored-by: Andriy Roshchenko <andriy.roshchenko@amd.com>
Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
Co-authored-by: root <root@banff-cyxtera-s83-2.ctr.dcgpu>
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
Co-authored-by: jefyang1 <146495389+jefyang1@users.noreply.github.com>
Co-authored-by: jefyang1 <Jeffreyj.Yang@amd.com>
* restore cron trigger (#1863)
* add vectorloads on non-k dim for memory pipelines (#1856)
* Support for dtypes (fp8, bf8, bf16 and fp16) for the ck_tile/03_gemm example. (#1845)
* Support bf16/fp8/bf8 datatypes for ck_tile/gemm
* remove commented out code.
* Addressing code review comments and enabling universal_gemm for all the supported data types.
* Merge conflict resolution.
* Solve the memory pipeline compilation error. Merge with the new change of CShuffle
* finish the feature, pass the tests
* Fix the pipeline and add the benchmark script for other data types
---------
Co-authored-by: ThomasNing <thomas.ning@amd.com>
* Extract prec_str and add separator to concat
* GetName add
* CK Tile - small fix to hotloop scheduler & KPack value. (#1867)
* Use SmemPack in HotLoop scheduler
* Additional debug print information
* Change KPack value.
Hardcode for now, as without AK1/BK1 there's no good way to determine
its value.
* Fix HotLoopScheduler MFMA instr parameters.
* Resolve merge issues
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
Co-authored-by: Jing Zhang <jizhan@amd.com>
Co-authored-by: aska-0096 <haocwang@amd.com>
Co-authored-by: Jun Liu <Liu.Jun@amd.com>
Co-authored-by: Andriy Roshchenko <andriy.roshchenko@amd.com>
Co-authored-by: Rostyslav Geyyer <rosty.geyyer@amd.com>
Co-authored-by: Rostyslav Geyyer <46627076+geyyer@users.noreply.github.com>
Co-authored-by: root <root@banff-cyxtera-s83-2.ctr.dcgpu>
Co-authored-by: Andriy Roshchenko <107577548+andriy-ca@users.noreply.github.com>
Co-authored-by: jefyang1 <146495389+jefyang1@users.noreply.github.com>
Co-authored-by: jefyang1 <Jeffreyj.Yang@amd.com>
Co-authored-by: jakpiase <jakub.piasecki@amd.com>
Co-authored-by: kylasa <sudhir.kylasa@amd.com>
Co-authored-by: ThomasNing <thomas.ning@amd.com>
Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
* Refactor universal gemm policy.
* Adapt example to refactor changes.
* Introduce static encoding pattern
* Adding shuffled encoding patterns.
* Fix err in reverse tuple.
* Add transpose_tile2d
* Small refactoring + doc
* Enable reading on contiguous dimension in all layouts.
* Transpose A/B register tile if needed for comp v3 pipeline.
* Take contiguous dim size when calculating dram vector load size.
* A/B smem pack size taken from WarpGemm attributes
* Update B LDS layout and setup tile distribution pattern at class level.
* Fix static assert.
* Fix errors in examples.
* Formatting & fix IsTranspose
* Fix VectorSize & refactor.
* Add error logging messages.
* Fix VecLoadSize and TransposeC for mem pipeline.
* Update unit-tests & disable mem pipeline.
* Clang format
* Update include/ck_tile/core/tensor/tile_window.hpp
Co-authored-by: jakpiase <jakub.piasecki@amd.com>
* Fix compilation and reviewers comments.
* Refactor unit-test. Fallback to non-universal gemm.
Need to use GemmPipelineAGmemBGmemCRegV1 for now,
since GemmKernel now also supports non-K-major vector reads.
---------
Co-authored-by: jakpiase <jakub.piasecki@amd.com>
* Update license year
* Add initial code to override decode problem
* Fix splitkv traits/args overriding error
* Reshape and transpose lse for decode
* Remove debug code
* Prettify example code
* Use better function name
* Add kMergeNumHeadGroupsSeqLenQ flag
Kernel user can use this switch to turn on/off optimization for
some problem sizes
* Add missing flag declarations
* Default turn off kMergeNumHeadGroupsSeqLenQ in codegen
* Group similar statements together
* Remove assumption of seqlen_q=1
* Remove kMergeNumHeadGroupsSeqLenQ from splitkv combine kernel
* Support kMergeNumHeadGroupsSeqLenQ=true in fmha splitkv kernel
* Run kMergeNumHeadGroupsSeqLenQ=true kernels when needed
* Fix group mode block skip logics
* Undo changes of normal fwd kernel
* Update in GridSize() and using GridSize() for splitkv kernel (#1799)
---------
Co-authored-by: Qianfeng <qianfeng.zhang@amd.com>
* add prenorm/postnorm support, refactor using generate.py
* update README
* update README
* fix format
* update some description and fix format
* update format
* format
* use non-raw for loading
* format and update n4096
* dynamic-quant ready
* update readme
* support fused dynamic-quant
* update fused-quant, with smooth
* update README
* update args
* update some based on comment
* CK-Tile GEMM with memory bound pipeline.
* Memory bound gemm pipeline.
* Fix not closed namespace.
* Block gemm mem pipeline draft.
* Do not use ck_tile:: within ck_tile namespace.
* Refactoring & Move Layout info to pipeline problem.
* Get hot loop and TailNum information before launching kernel.
* Fixes in pipeline.
* Add comment to load_tile_raw and change variable naming style.
* Few small changes & formatting.
* Do not use macro.
* Add gtests.
* Use AccDataType for Output of MFMA instruction.
* Formatting.
* Refactor gemm examples.
* Switch over to current block gemm.
* Use currently available pipeline policy.
* Refactoring and review comments.
* Fixes after merge.
* Add missing include.
* Add load tile overload which accepts output tensor as parameter.
* This gives an 8% perf boost at the cost of using more registers.
* Rename example.
* Small changes.
* Fix compilation err and lower K.
* Support different layouts for A/B
* Fix vector size for different layouts.
* Rename Alignment into VectorSize
* Unblock tests.
* Add reduce2d new api
* Prevent users from using cross-warp reduction
* Fix bug of std calculation
* Add rmsnorm2d
* Add rmsnorm small example
* Remove static assert to prevent compile fail
* Add script to test performance and correctness
* Add missing cmake change
* refine naming
* refine example of rmsnorm
* Fix bug of rmsnorm
* Refine naming
* Fix cmake
* clang format
* Refine pipeline name
* Add add_rmsnorm2d_rdquant kernel
* Add reduce op
* host verification
* Fix bug of one pass pipeline
* Refine tile size
* Add two pass pipeline
* Rename two pass to three pass
* Fix bug of kSaveX == false
* Add instance library
* Add test script
* Fix bug of x verification
* Add save_x to trait
* Add README
* Move reduce2d into reduce folder
* Fix bug of welford when number of m warp > 1
* remove redundant comment
* 1. move 06_rmsnorm2d to 10_rmsnorm2d
2. move 07_add_rmsnorm2d_rdquant to 11_add_rmsnorm2d_rdquant
* clang format and add missing header
* Add host validation of add + layernorm2d + rsquant
* Revert "Add host validation of add + layernorm2d + rsquant"
This reverts commit 936cb45797.
* Remove deprecated flag
* port layernorm
* change warp_welford.hpp
* Update warpshuffle
* 1. Add save mean and save std back
2. Move construction of tensor_view and tile_window to operator()
* refine welford max count calculation
* unify layernorm api
* Rename file
* Remove save mean and inv std
* Revert "refine welford max count calculation"
This reverts commit 022365802b.
* Fix order of parameter
* refine welford max count calculation again
* Remove fp32 instances
* Fix bug of padding
* refactor api
* Support bf16
* Extract common function
* Refine arg of operator()
* Add kMThreadPerBlock to template parameter
* clang format
* Refine variable name
* Refine file name
* remove redundant line
* refactor layernorm2d pipeline and add block-per-block utility
* fix name
* rename more
* add more block-per-tile instance
* remove duplicated define
* update instance for 2048, 1024 case
* support up to 2048 now
* opt loading
* add n1536
* Add two pass pipeline
* format
* Fix incorrect type
* parallel compilation
* Use smaller N
* fix 2p pass
* Support Repeat_M in distribution
* Refine naming
* Add reduce example
---------
Co-authored-by: letaoqin <letaoqin@amd.com>
Co-authored-by: aska-0096 <haocwang@amd.com>
Co-authored-by: rocking <ChunYu.Lai@amd.com>
Co-authored-by: carlushuang <carlus.huang@amd.com>
* Add layernorm2d forward
* Refine file path
* clang format
* Exclude ck_tile op from all
* use add_executable instead
* refactor layernorm2d_fwd example
---------
Co-authored-by: carlushuang <carlus.huang@amd.com>
* enable gfx940
* switch between intrinsic mfma routines on mi100/200 and mi300
* fix mfma_int8 on MI300
* disable 2 int8 examples on MI300
* Update cmake-ck-dev.sh
* restore gitignore file
* modify Jenkinsfile to the internal repo
* Bump rocm-docs-core from 0.24.0 to 0.29.0 in /docs/sphinx
Bumps [rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) from 0.24.0 to 0.29.0.
- [Release notes](https://github.com/RadeonOpenCompute/rocm-docs-core/releases)
- [Changelog](https://github.com/RadeonOpenCompute/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/RadeonOpenCompute/rocm-docs-core/compare/v0.24.0...v0.29.0)
---
updated-dependencies:
- dependency-name: rocm-docs-core
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
* initial enablement of gfx950
* fix clang format
* disable examples 31 and 41 int8 on gfx950
* add code
* fix build wip
* fix xx
* now can build
* naming
* minor fix
* wip fix
* fix macro for exp2; fix warpgemm a/b in transposedC
* unify as tuple_array
* Update the required Python version to 3.9
* Update executable name in test scripts
* re-structure tuple/array to avoid spill
* Merge function templates
* Fix format
* Add constraint to array<> ctor
* Re-use function
* Some minor changes
* remove wrong code in store_raw()
* fix compile issue in transpose
* Rename enum
Rename 'cood_transform_enum' to 'coord_transform_enum'
* let more integral_constant->constant, and formatting
* make sure thread_buffer can be tuple/array
* temp fix buffer_store spill
* not using custom data type by default; now we get the same ISA-level code as opt_padding
* fix compile error, fp8 not ready now
* fix fp8 duplicated move/shift/and/or problem
* Default use CK_TILE_FLOAT_TO_FP8_STOCHASTIC rounding mode
* fix scratch in fp8 kernel
* update some readme
* fix merge from upstream
* sync with upstream
* sync upstream again
* sync 22
* remove unused
* fix clang-format
* update README of ck_tile example
* fix several issue
* set the minimum Python version to 3.8
* remove ck_tile example from default cmake target like all/install/check
* remove mistake
* 1).support recipe in generate.py 2).use simplified mask type 3).change left/right to pass into karg
* fix some bug in group-mode masking and codegen. update README
* F8 quantization for FMHA forward (#1224)
* Add SAccElementFunction, PComputeElementFunction, OAccElementFunction in pipeline
* Add element function to fmha api
* Adjust P elementwise function
* Fix bug of elementwise op, our elementwise op is not inout
* Add some elementwise op, prepare to quantization
* Let generate.py generate different elementwise functions
* To prevent compiler issue, remove the elementwise function we have not used.
* Remove f8 pipeline, we should share the same pipeline even in f8
* Remove remove_cvref_t
* Avoid warning
* Fix wrong fp8 QK/KV block gemm setting
* Check fp8 rounding error in check_err()
* Set fp8 rounding error for check_err()
* Use CK_TILE_FLOAT_TO_FP8_STANDARD as default fp8 rounding mode
* 1. codgen the f8 api and kernel
2. f8 host code
* prevent warning in filter mode
* Remove not-in-use elementwise function kargs
* Remove more not-in-use elementwise function kargs
* Small refinements in C++ source files
* Use conditional_t<> to simplify code
* Support heterogeneous argument for binary function types
* Re-use already-existing scales<> functor template
* Fix wrong value produced by saturating
* Generalize the composes<> template
* Unify saturates<> implementation
* Fix type errors in composes<>
* Extend less_equal<>
* Reuse the existing template less_equal<> in check_err()
* Add equal<float> & equal<double>
* Rename check_err() parameter
* Rename check_err() parameter
* Add FIXME comment for adding new macro in future
* Remove unnecessary cast to void
* Eliminate duplicated code
* Avoid dividing api pool into more than 2 groups
* Use more clear variable names
* Use affirmative condition in if stmt
* Remove blank lines
* Do not use perfect forwarding in composes<>
* To fix compile error, revert generate.py back to 4439cc107d
* Fix bug of p element function
* Add compute element op to host softmax
* Remove element function in api interface
* Extract user parameter
* Rename pscale and oscale variable
* rename f8 to fp8
* rename more f8 to fp8
* Add pipeline::operator() without element_functor
* 1. Remove deprecated pipeline enum
2. Refine host code parameter
* Use quantization range as input
* 1. Rename max_dtype to dtype_max.
2. Rename scale to scale_s
3.Add init description
* Refine description
* prevent early return
* unify _squant kernel name in cpp, update README
* Adjust the default range.
* Refine error message and bias range
* Add fp8 benchmark and smoke test
* fix fp8 swizzle_factor=4 case
---------
Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
Co-authored-by: carlushuang <carlus.huang@amd.com>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: illsilin <Illia.Silin@amd.com>
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
Co-authored-by: Jing Zhang <jizha@amd.com>
Co-authored-by: zjing14 <zhangjing14@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Po-Yen, Chen <PoYen.Chen@amd.com>
Co-authored-by: rocking <ChunYu.Lai@amd.com>