Changelog for Composable Kernel
Documentation for Composable Kernel available at https://rocm.docs.amd.com/projects/composable_kernel/en/latest/.
(Unreleased) Composable Kernel 1.3.0
Added
- Added support for explicit GEMM in CK_TILE grouped convolution forward and backward weight.
- Added TF32 convolution support on gfx942 and gfx950 in CK. It can be enabled or disabled via the `DTYPES` value "tf32"; see the example after this list.
- Added attention sink support for FMHA FWD, including the qr_ks_vs, qr_async, and splitkv pipelines.
- Added support for microscaling (MX) FP8/FP4 mixed data types to Flatmm pipeline.
- Added support for dynamic tensor-wise fp8 quantization in the fp8 FMHA forward kernel.
- Added FP8 KV cache support for FMHA batch prefill.
- Added FMHA batch prefill kernel support for several KV cache layouts, flexible page sizes, and different lookup table configurations.
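A hedged example of enabling the new TF32 instances at configure time; the compiler path and the exact set of `DTYPES` entries are illustrative assumptions, not a required configuration:

```shell
# Illustrative configure step: adding "tf32" to DTYPES enables the TF32
# convolution instances on supported targets such as gfx942 and gfx950.
cmake -D CMAKE_CXX_COMPILER=/opt/rocm/bin/amdclang++ \
      -D DTYPES="tf32;fp32" \
      -D GPU_TARGETS="gfx942;gfx950" ..
```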
Changed
Upcoming changes
Composable Kernel 1.2.0 for ROCm 7.2.0
Added
- Added support for bf16 data type to grouped_gemm and grouped_gemm_preshuffle.
- Added Col-Col-Row-Col layout support for aquant mode in blockscale GEMM.
- Added support for mixed precision fp8 x bf8 universal GEMM and weight preshuffle GEMM.
- Added a compute async pipeline in the CK Tile universal GEMM on gfx950.
- Added support for the B Tensor type `pk_int4_t` in the CK Tile weight preshuffle GEMM.
- Added a new API to load different memory sizes to SGPR.
- Added support for B Tensor preshuffle in CK Tile grouped GEMM.
- Added a basic copy kernel example and supporting documentation for new CK Tile developers.
- Added support for grouped GEMM kernels to perform Multi D elementwise operations.
- Added support for multiple ABD GEMM.
- Added benchmarking support for tile engine GEMM Multi D.
- Added block scaling support in CK Tile GEMM, allowing flexible use of quantization matrices from either A or B operands.
- Added the row-wise column-wise quantization for CK Tile GEMM and CK Tile grouped GEMM.
- Added support for f32 to FMHA (forward and backward).
- Added tensor-wise quantization for CK Tile GEMM.
- Added support for a batched contraction kernel.
- Added WMMA (gfx12) support for FMHA.
- Added a pooling kernel in CK_TILE.
- Added a top-k sigmoid kernel in CK_TILE.
- Added blockscale 2D support for CK_TILE GEMM.
- Added a Flatmm pipeline for microscaling (MX) FP8/FP4 data types.
Changed
- Removed `BlockSize` in `make_kernel` and `CShuffleEpilogueProblem` to support Wave32 in CK Tile (#2594).
- Added an optional template parameter `Arch` (`gfx9_t`, `gfx12_t`, etc.) to `make_kernel` to support linking multiple object files that have the same kernel compiled for different architectures; see the sketch after this list.
- FMHA examples and tests can now be built for multiple architectures (gfx9, gfx950, gfx12) at the same time.
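A minimal sketch of what per-architecture instantiation could look like, assuming the new `Arch` template parameter is passed to `make_kernel`; the kernel type and launch arguments below are illustrative placeholders, not the exact CK Tile signature:

```cpp
// Illustrative only: instantiate the same kernel for two architectures so the
// resulting object files can be linked into one binary without symbol clashes.
// MyKernel, grids, blocks, and kargs stand in for the real kernel type and
// launch parameters.
auto kernel_gfx9  = ck_tile::make_kernel<ck_tile::gfx9_t>(MyKernel{}, grids, blocks, 0, kargs);
auto kernel_gfx12 = ck_tile::make_kernel<ck_tile::gfx12_t>(MyKernel{}, grids, blocks, 0, kargs);
```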
Upcoming changes
- Composable Kernel will be adopting C++20 features in an upcoming ROCm release, updating the minimum compiler requirement to C++20. Ensure that your development environment complies with this requirement to facilitate a seamless transition.
Composable Kernel 1.1.0 for ROCm 7.1.1
Upcoming changes
- Composable Kernel will be adopting C++20 features in an upcoming ROCm release, updating the minimum compiler requirement to C++20. Ensure that your development environment complies with this requirement to facilitate a seamless transition.
Composable Kernel 1.1.0 for ROCm 7.1.0
Added
- Added support for hdim as a multiple of 32 for FMHA (fwd/fwd_splitkv/bwd).
- Added support for an elementwise kernel.
Upcoming changes
- Non-grouped convolutions are deprecated. Their functionality is supported by grouped convolution.
Composable Kernel 1.1.0 for ROCm 7.0.0
Added
- Added support for bf16, f32, and f16 for 2D and 3D NGCHW grouped convolution backward data.
- Added a fully asynchronous HOST (CPU) arguments copy flow for CK grouped GEMM kernels.
- Added support for the GKCYX layout for grouped convolution forward (NGCHW/GKCYX/NGKHW); the number of instances in the instance factory for NGCHW/GKYXC/NGKHW has been reduced.
- Added support for GKCYX layout for grouped convolution backward weight (NGCHW/GKCYX/NGKHW).
- Added support for GKCYX layout for grouped convolution backward data (NGCHW/GKCYX/NGKHW).
- Added support for a Stream-K version of mixed fp8/bf16 GEMM.
- Added support for Multiple D GEMM.
- Added a GEMM pipeline for microscaling (MX) FP8/FP6/FP4 data types.
- Added support for FP16 2:4 structured sparsity to universal GEMM.
- Added support for Split K for grouped convolution backward data.
- Added logit soft-capping support for FMHA forward kernels; see the sketch after this list.
- Added support for hdim as a multiple of 32 for FMHA (fwd/fwd_splitkv).
- Added benchmarking support for tile engine GEMM.
- Added Ping-pong scheduler support for GEMM operation along the K dimension.
- Added rotating buffer feature for CK_Tile GEMM.
- Added int8 support for CK_TILE GEMM.
- Added the CK Tile Epilogue Chainer framework for composable epilogue sequences in GEMM operations.
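Regarding the logit soft-capping entry above: soft-capping conventionally bounds attention logits with a scaled tanh before softmax. A minimal sketch, assuming CK's kernels follow this conventional form (the function name is hypothetical):

```cpp
#include <cmath>

// Conventional tanh-based soft cap: maps any logit smoothly into (-cap, cap),
// preserving small values (tanh(x) ~ x near 0) while bounding outliers.
float soft_cap_logit(float logit, float cap)
{
    return cap * std::tanh(logit / cap);
}
```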
Optimized
- Optimized the GEMM multiply-multiply preshuffle and LDS bypass with KGroup packing and a better instruction layout.
- Added a vectorized transpose optimization for CK Tile.
- Added asynchronous copy support for gfx950.
Changed
- Removed support for gfx940 and gfx941 targets (#1944)
- Replaced the raw buffer load/store intrinsics with Clang20 built-ins (#1876)
- DL and DPP kernels are now enabled by default.
- Number of instances in instance factory for grouped convolution forward NGCHW/GKYXC/NGKHW has been reduced.
- Number of instances in instance factory for grouped convolution backward weight NGCHW/GKYXC/NGKHW has been reduced.
- Number of instances in instance factory for grouped convolution backward data NGCHW/GKYXC/NGKHW has been reduced.
Composable Kernel 1.1.0 for ROCm 6.1.0
Additions
- Added generic instances for GEMM XDL operations (#1161)
- Added gamma and beta parameters for the layernorm and groupnorm bwd operations (#1133)
- Introduced wrapper sublibrary (limited functionality). (#1071, #1098, #1108, #1126)
- Added an option to vary the number of warm-up cycles and iterations for ckProfiler (#1124)
Optimizations
- New performance optimizations for GEMM operations on MI200 and MI300 architectures (#1135)
Fixes
- Reduced the build time for most GPU architectures (#1084)
- Fixed some conversion issues for fp8 data type (#1099)
Changes
None
Known issues
None
Composable Kernel 1.1.0 for ROCm 6.0.0
Fixes
- Fixed a hazard associated with inline v_dot (#808)
- Fixed two bugs in grouped convolution backward data without K padding (#848 #876)
Optimizations
None
Additions
- Added an image-to-column kernel (#867)
- Added a column-to-image kernel (#930)
- Support for 3D grouped convolution on RDNA 3 GPUs (#935, #950, #985)
- Grouped convolution support for small K and C (#822 #879 #897)
- Support for NHWGC (2D and 3D) grouped convolution backward weight (#769 #804)
- Support for bf16/f32/f16 and NHWGC (2D and 3D) grouped convolution backward data (#757 #799)
- Support for Batched GEMM DL (#732)
Changes
- Changed the grouped convolution API to maintain consistency with other convolution kernels (#817)
Composable Kernel 0.2.0 for ROCm 5.7.0
Fixes
- Fixed a bug in 6-dimensional kernels (#555)
- Fixed a test case failure with grouped convolution backward weight (#524)
Optimizations
- Improved the performance of the normalization kernel
Additions
- New CMake flags (see the example after this list):
- "DL_KERNELS" -- Must be set to "ON" to build the GEMM DL and batched_gemm_multi_d_dl instances
- "DTYPES" -- Can be set to any subset of "fp64;fp32;fp16;fp8;bf16;int8" to build an instance of the specified data types
- "INSTANCES_ONLY" -- Only builds CK library and instances without tests, examples, or profiler
- New feature: if GPU_TARGETS is not set in the CMake command line, CK will be built for all targets supported by the compiler
- Support for MI300A/MI300X
- Support for AMD RDNA 3
- New user tutorial (#563)
- Additional instances for irregular GEMM sizes (#560)
- New inter-wave consumer-producer programming model for GEMM kernels (#310)
- GEMM with support for multiple elementwise fusions (multi-D) (#534)
- Multi-embeddings support (#542)
- AMD RDNA 3 blockwise GEMM and real GEMM support (#541)
- AMD RDNA grouped convolution backward weight support (#505)
- MaxPool and AvgPool forward (#815); MaxPool backward (#750)
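As referenced in the CMake flags entry above, a hedged example of combining these flags at configure time; the compiler path and chosen values are illustrative, and GPU_TARGETS is omitted so CK builds for all compiler-supported targets per the note above:

```shell
# Illustrative configure step: build only the library and instances
# (INSTANCES_ONLY), include the GEMM DL / batched_gemm_multi_d_dl instances
# (DL_KERNELS), and restrict instantiation to fp16 and int8 (DTYPES).
cmake -D CMAKE_CXX_COMPILER=/opt/rocm/bin/amdclang++ \
      -D DL_KERNELS=ON \
      -D DTYPES="fp16;int8" \
      -D INSTANCES_ONLY=ON ..
```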
Changes
None